Similar Documents (20 results)
1.
We study the class of state-space models and perform maximum likelihood estimation for the model parameters. We consider a stochastic approximation expectation–maximization (SAEM) algorithm to maximize the likelihood function, with the novelty of using approximate Bayesian computation (ABC) within SAEM. The task is to provide each iteration of SAEM with a filtered state of the system, and this is achieved using an ABC sampler for the hidden state, based on sequential Monte Carlo methodology. It is shown that the resulting SAEM-ABC algorithm can be calibrated to return accurate inference, and in some situations it can outperform a version of SAEM incorporating the bootstrap filter. Two simulation studies are presented: first a nonlinear Gaussian state-space model, then a state-space model whose dynamics are expressed by a stochastic differential equation. Comparisons are presented with iterated filtering for maximum likelihood inference, and with Gibbs sampling and particle marginal methods for Bayesian inference.
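The coupling described above amounts, at each SAEM iteration, to (i) drawing a filtered latent trajectory with an ABC-flavoured particle filter and (ii) folding its sufficient statistics into a running average before maximizing. The sketch below shows only that stochastic-approximation skeleton on a toy linear-Gaussian state-space model; the `abc_filter_draw` routine, the tolerance, the step-size schedule, and the model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative (assumed) model: x_t = phi*x_{t-1} + s_x*eps,  y_t = x_t + s_y*nu ---
phi_true, s_x, s_y, T = 0.8, 0.5, 0.5, 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + s_x * rng.normal()
y = x + s_y * rng.normal(size=T)

def abc_filter_draw(phi, n_part=200, eps=0.3):
    """Draw one latent trajectory with a simple ABC-style bootstrap filter:
    particles are weighted by a kernel on |simulated obs - actual obs| instead of
    the exact observation density (an illustrative stand-in, not the paper's sampler)."""
    parts = np.zeros((T, n_part))
    anc = np.zeros((T, n_part), dtype=int)
    for t in range(1, T):
        parts[t] = phi * parts[t - 1] + s_x * rng.normal(size=n_part)
        y_sim = parts[t] + s_y * rng.normal(size=n_part)
        w = np.exp(-0.5 * ((y_sim - y[t]) / eps) ** 2) + 1e-300
        idx = rng.choice(n_part, size=n_part, p=w / w.sum())
        parts[t], anc[t] = parts[t][idx], idx
    path = np.zeros(T)                  # trace back one ancestral path
    j = 0
    for t in range(T - 1, 0, -1):
        path[t] = parts[t, j]
        j = anc[t, j]
    return path

# --- SAEM: stochastic approximation of sufficient statistics, then an M-step ---
phi_hat, S1, S2 = 0.2, 0.0, 1e-8
for k in range(1, 51):
    gamma = 1.0 if k <= 10 else 1.0 / (k - 10)     # warm-up, then decreasing step sizes
    xs = abc_filter_draw(phi_hat)
    s1, s2 = np.sum(xs[:-1] * xs[1:]), np.sum(xs[:-1] ** 2)
    S1, S2 = (1 - gamma) * S1 + gamma * s1, (1 - gamma) * S2 + gamma * s2
    phi_hat = S1 / S2                               # closed-form maximizer for this toy model
print("estimated phi:", round(phi_hat, 3))
```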

2.
This paper proposes a new step, called the P-step, to handle a linear or nonlinear equality constraint in addition to the conventional EM algorithm. This new step is easy to implement, first because only the first derivatives of the objective function and the constraint function are necessary, and second because the P-step is carried out after the conventional EM algorithm. The estimate sequence produced by our method enjoys a monotonic increase in the observed likelihood function. We apply the P-step, in addition to the conventional EM algorithm, to two illustrative examples. The first example has a linear constraint function; the second has a nonlinear constraint function. We show finally that there exists a Kuhn–Tucker vector at the limit point produced by our method.

3.
We study a modification of the EMS algorithm in which each step of the EMS algorithm is preceded by a nonlinear smoothing step built from S, the smoothing operator of the EMS algorithm. In the context of positive integral equations (à la positron emission tomography) the resulting algorithm is related to a convex minimization problem which always admits a unique smooth solution, in contrast to the unmodified maximum likelihood setup. The new algorithm has slightly stronger monotonicity properties than the original EM algorithm. This suggests that the modified EMS algorithm is actually an EM algorithm for the modified problem. The existence of a smooth solution to the modified maximum likelihood problem and the monotonicity together imply the strong convergence of the new algorithm. We also present some simulation results for the integral equation of stereology, which suggest that the new algorithm behaves roughly like the EMS algorithm.

4.
Variance components estimation and mixed model analysis are central themes in statistics with applications in numerous scientific disciplines. Despite the best efforts of generations of statisticians and numerical analysts, maximum likelihood estimation (MLE) and restricted MLE of variance component models remain numerically challenging. Building on the minorization–maximization (MM) principle, this article presents a novel iterative algorithm for variance components estimation. Our MM algorithm is trivial to implement and competitive on large data problems. The algorithm readily extends to more complicated problems such as linear mixed models, multivariate response models possibly with missing data, maximum a posteriori estimation, and penalized estimation. We establish the global convergence of the MM algorithm to a Karush–Kuhn–Tucker point and demonstrate, both numerically and theoretically, that it converges faster than the classical EM algorithm when the number of variance components is greater than two and all covariance matrices are positive definite. Supplementary materials for this article are available online.
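A common MM update for variance components multiplies each component by sqrt(r'Ω⁻¹VᵢΩ⁻¹r / tr(Ω⁻¹Vᵢ)), alternating with a generalized least squares update of the fixed effects, where Ω = Σᵢ σᵢ² Vᵢ and r is the current residual. The sketch below applies that multiplicative update to a simulated two-component model; the data, starting values, and iteration count are illustrative, and the article's algorithm may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Toy two-variance-component model: y = X b + u + e,
#     u ~ N(0, s1^2 * Z Z'),  e ~ N(0, s2^2 * I).  All sizes are illustrative. ---
n, p, q = 300, 3, 8
X = rng.normal(size=(n, p))
Z = rng.normal(size=(n, q))
b_true = np.array([1.0, -0.5, 2.0])
y = X @ b_true + Z @ rng.normal(scale=1.0, size=q) + rng.normal(scale=0.7, size=n)

V = [Z @ Z.T, np.eye(n)]          # covariance basis matrices V_1, V_2
sig2 = np.array([1.0, 1.0])       # starting values for the variance components

for it in range(100):
    Omega = sum(s * Vi for s, Vi in zip(sig2, V))
    Oinv = np.linalg.inv(Omega)
    XtOi = X.T @ Oinv
    beta = np.linalg.solve(XtOi @ X, XtOi @ y)   # GLS update of the fixed effects
    r = y - X @ beta
    for i, Vi in enumerate(V):                   # multiplicative MM update per component
        num = r @ Oinv @ Vi @ Oinv @ r
        den = np.trace(Oinv @ Vi)
        sig2[i] *= np.sqrt(num / den)

print("variance components:", np.round(sig2, 3))
print("fixed effects:", np.round(beta, 3))
```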

5.
We study and analyze the parameterized method in view of accelerating the EM algorithm. Some theoretical results and a discussion of the choice of the step size improve the understanding of the method and its limits.
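The abstract leaves the parameterization unspecified; one common form of step-size-based EM acceleration is the over-relaxed update θ ← θ + λ(M(θ) − θ), where M is the ordinary EM map and λ = 1 recovers plain EM. The sketch below illustrates that generic idea on a toy two-component Gaussian mixture; the model, the value of λ, and the iteration counts are illustrative assumptions, not the article's method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: two-component Gaussian mixture with known weight/variance and unknown
# component means (an illustrative assumption; the article treats the step-size
# question abstractly, not this particular model).
w, sd = 0.4, 1.0
x = np.concatenate([rng.normal(-2, sd, 400), rng.normal(3, sd, 600)])

def em_map(mu):
    """One ordinary EM update M(mu) for the two component means."""
    m1, m2 = mu
    r1 = w * np.exp(-0.5 * ((x - m1) / sd) ** 2)
    r2 = (1 - w) * np.exp(-0.5 * ((x - m2) / sd) ** 2)
    g = r1 / (r1 + r2)                                   # E-step responsibilities
    return np.array([np.sum(g * x) / np.sum(g),
                     np.sum((1 - g) * x) / np.sum(1 - g)])   # M-step

def accelerated_em(mu0, lam, iters=100):
    """Over-relaxed EM: mu <- mu + lam * (M(mu) - mu); lam = 1 is plain EM."""
    mu = np.array(mu0, dtype=float)
    for _ in range(iters):
        mu = mu + lam * (em_map(mu) - mu)
    return mu

print("plain EM       :", np.round(accelerated_em([0.0, 1.0], lam=1.0), 3))
print("over-relaxed EM:", np.round(accelerated_em([0.0, 1.0], lam=1.5), 3))
```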

6.
Finite mixture models have been used to fit data from heterogeneous populations in many applications. The expectation–maximization (EM) algorithm is the most popular method for estimating the parameters of a finite mixture model; a Bayesian approach is another. However, the EM algorithm often converges to a local maximum and is sensitive to the choice of starting points, while in the Bayesian approach the Markov chain Monte Carlo (MCMC) sampler sometimes gets stuck in a local mode and has difficulty moving to other modes. Hence, in this paper we propose a new method to overcome this limitation of the EM algorithm, so that EM can estimate the parameters in the global maximum region, and to develop a more effective Bayesian approach, so that the MCMC chain moves from one mode to another more easily in the mixture model. Our approach combines simulated annealing (SA) with adaptive rejection Metropolis sampling (ARMS). Although SA is a well-known approach for detecting distinct modes, its limitation is the difficulty of choosing a sequence of proper proposal distributions for a target distribution. Since ARMS uses a piecewise linear envelope function as a proposal distribution, we incorporate ARMS into the SA approach so that we can start from a more suitable proposal distribution and detect separate modes. As a result, we can detect the global maximum region and estimate the parameters in this region. We refer to this approach as ARMS annealing. By combining ARMS annealing with the EM algorithm and with the Bayesian approach, respectively, we propose two approaches: an EM-ARMS annealing algorithm and a Bayesian-ARMS annealing approach. We compare these two approaches with the traditional EM algorithm alone and the Bayesian approach alone using simulation, showing that the two approaches are comparable to each other but perform better than the EM algorithm alone and the Bayesian approach alone: both detect the global maximum region well and estimate the parameters in that region. We demonstrate the advantage of our approaches using a mixture of two Poisson regression models, fitted to survey data on the number of charitable donations.
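The overall recipe — use an annealing search to land in the global region, then let EM refine the estimates there — can be illustrated with a small stand-in. The sketch below uses a plain random-walk simulated-annealing search over starting values (not the ARMS proposals of the paper) followed by ordinary EM for a two-component Poisson mixture; the cooling schedule, proposal scales, and data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)

# Toy data from a two-component Poisson mixture (the abstract's illustrative model class).
lam_true, w_true = (2.0, 9.0), 0.35
z = rng.random(800) < w_true
x = np.where(z, rng.poisson(lam_true[0], 800), rng.poisson(lam_true[1], 800))

def loglik(w, l1, l2):
    return np.sum(np.log(w * poisson.pmf(x, l1) + (1 - w) * poisson.pmf(x, l2)))

def em(w, l1, l2, iters=200):
    """Plain EM for the two-component Poisson mixture."""
    for _ in range(iters):
        r = w * poisson.pmf(x, l1)
        r = r / (r + (1 - w) * poisson.pmf(x, l2))          # E-step responsibilities
        w = r.mean()                                         # M-step
        l1, l2 = np.sum(r * x) / np.sum(r), np.sum((1 - r) * x) / np.sum(1 - r)
    return w, l1, l2

# A *generic* simulated-annealing search over starting values (random-walk proposals,
# NOT the paper's ARMS proposals) to locate a promising region before running EM.
state = (0.5, x.mean(), x.mean() + 1.0)
best, best_ll = state, loglik(*state)
for k in range(300):
    T = 5.0 * 0.98 ** k                                      # cooling schedule (assumed)
    cand = (np.clip(state[0] + 0.05 * rng.normal(), 0.05, 0.95),
            abs(state[1] + rng.normal()),
            abs(state[2] + rng.normal()))
    if np.log(rng.random()) < (loglik(*cand) - loglik(*state)) / T:
        state = cand
        if loglik(*state) > best_ll:
            best, best_ll = state, loglik(*state)

print("EM from SA-selected start:", np.round(em(*best), 3))
```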

7.
The family of expectation–maximization (EM) algorithms provides a general approach to fitting flexible models for large and complex data. The expectation (E) step of EM-type algorithms is time-consuming in massive data applications because it requires multiple passes through the full data. We address this problem by proposing an asynchronous and distributed generalization of the EM called the distributed EM (DEM). Using DEM, existing EM-type algorithms are easily extended to massive data settings by exploiting the divide-and-conquer technique and widely available computing power, such as grid computing. The DEM algorithm reserves two groups of computing processes, called workers and managers, for performing the E step and the maximization step (M step), respectively. The samples are randomly partitioned into a large number of disjoint subsets and are stored on the worker processes. The E step of the DEM algorithm is performed in parallel on all the workers, and every worker communicates its results to the managers at the end of the local E step. The managers perform the M step after they have received results from a γ-fraction of the workers, where γ is a fixed constant in (0, 1]. The sequence of parameter estimates generated by the DEM algorithm retains the attractive properties of EM: convergence of the sequence of parameter estimates to a local mode and a linear global rate of convergence. Across diverse simulations focused on linear mixed-effects models, the DEM algorithm is significantly faster than competing EM-type algorithms while having similar accuracy. The DEM algorithm maintains its superior empirical performance on a movie ratings database consisting of 10 million ratings. Supplementary material for this article is available online.
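A sequential toy version of the worker/manager split makes the γ-fraction idea concrete. In the sketch below the data are divided across "worker" subsets, only a random γ-fraction of workers refresh their local E-step sufficient statistics in each round, and the "manager" M step runs on the aggregated (partly stale) statistics; the two-component Gaussian mixture, subset count, and γ value are illustrative assumptions, and real asynchrony over grid computing is not simulated.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data for a two-component 1-D Gaussian mixture, split across "worker" subsets.
x = np.concatenate([rng.normal(-1, 1, 5000), rng.normal(4, 1.5, 5000)])
rng.shuffle(x)
subsets = np.array_split(x, 20)           # each subset lives on one worker (illustrative)

def local_estep(xs, w, mu, sd):
    """Worker-side E-step: sufficient statistics of one subset under current parameters."""
    d = np.stack([w_k * np.exp(-0.5 * ((xs - m) / s) ** 2) / s
                  for w_k, m, s in zip(w, mu, sd)])
    g = d / d.sum(axis=0)                 # responsibilities
    return np.array([g.sum(axis=1), g @ xs, g @ xs**2])   # (counts, sums, sums of squares)

# Manager state: one (possibly stale) statistic block per worker.
w, mu, sd = np.array([.5, .5]), np.array([0., 1.]), np.array([1., 1.])
stats = [local_estep(s, w, mu, sd) for s in subsets]

gamma = 0.5                                # manager waits for a 0.5-fraction of workers
for it in range(100):
    # Sequential stand-in for asynchrony: only a random gamma-fraction of workers
    # refresh their statistics this round; the rest contribute stale statistics.
    for j in rng.choice(len(subsets), size=int(gamma * len(subsets)), replace=False):
        stats[j] = local_estep(subsets[j], w, mu, sd)
    n_k, s1, s2 = sum(stats)               # manager aggregates worker statistics
    w, mu = n_k / n_k.sum(), s1 / n_k      # M-step on the aggregated statistics
    sd = np.sqrt(np.maximum(s2 / n_k - mu**2, 1e-8))

print("weights:", np.round(w, 3), "means:", np.round(mu, 3), "sds:", np.round(sd, 3))
```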

8.
Haplotype inference plays a key role in modern linkage analysis and association analysis. Current haplotype inference methods essentially infer an individual's haplotypes from genotypes, yet in real pedigrees the genotypes of some individuals are often partially missing or completely unknown. This paper presents an EM method for haplotype inference when a pedigree contains individuals whose genotypes are partially or completely missing, and also provides the standard errors of the parameter estimates. Finally, a simulation study confirms the feasibility of the proposed method.

9.
We introduce a new algorithm, extended regularized dual averaging (XRDA), for solving regularized stochastic optimization problems, which generalizes the regularized dual averaging (RDA) method. The main novelty of the method is that it allows a flexible control of the backward step size. For instance, the backward step size used in RDA grows without bound, while for XRDA the backward step size can be kept bounded. We demonstrate experimentally that additional control over the backward step size can speed up the convergence of the algorithm while preserving desired properties of the iterates, such as sparsity. Theoretically, we show that the XRDA method achieves the same convergence rate as RDA for general convex objectives.
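A minimal concrete picture of the backward step size is the well-known ℓ1 regularized dual averaging update, where the iterate is obtained in closed form from the running average of gradients and a sequence β_t multiplying the proximal (backward) term. The sketch below implements that standard RDA update on a toy sparse regression problem and mimics the XRDA idea only by passing a bounded β_t sequence; the problem, constants, and the specific bounded schedule are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy sparse regression: minimize E[(x'w - y)^2 / 2] + lam * ||w||_1.
d, lam = 50, 0.05
w_true = np.zeros(d)
w_true[:5] = [2.0, -1.5, 1.0, -0.5, 3.0]

def rda_l1(beta_fn, T=20000):
    """Regularized dual averaging with an l1 regularizer (closed-form soft-threshold step).
    beta_fn(t) is the 'backward step size' sequence; classical RDA lets it grow like
    sqrt(t), while the bounded variant below only mimics the XRDA idea (an assumption,
    not the paper's exact scheme)."""
    w, gbar = np.zeros(d), np.zeros(d)
    for t in range(1, T + 1):
        x = rng.normal(size=d)
        y = x @ w_true + 0.1 * rng.normal()
        g = (x @ w - y) * x                      # stochastic gradient of the smooth loss
        gbar += (g - gbar) / t                   # running average of gradients
        shrunk = np.sign(gbar) * np.maximum(np.abs(gbar) - lam, 0.0)
        w = -(t / beta_fn(t)) * shrunk           # closed-form dual-averaging step
    return w

w_rda  = rda_l1(lambda t: 20.0 * np.sqrt(t))                 # unbounded backward step
w_xrda = rda_l1(lambda t: min(20.0 * np.sqrt(t), 2000.0))    # bounded variant (illustrative)
print("nonzeros (RDA)    :", np.sum(np.abs(w_rda)  > 1e-8))
print("nonzeros (bounded):", np.sum(np.abs(w_xrda) > 1e-8))
```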

10.
The EM algorithm is a widely used methodology for penalized likelihood estimation. Provable monotonicity and convergence are the hallmarks of the EM algorithm and these properties are well established for smooth likelihood and smooth penalty functions. However, many relaxed versions of variable selection penalties are not smooth. In this paper, we introduce a new class of space alternating penalized Kullback proximal extensions of the EM algorithm for nonsmooth likelihood inference. We show that the cluster points of the new method are stationary points even when they lie on the boundary of the parameter set. We illustrate the new class of algorithms for the problems of model selection for finite mixtures of regression and of sparse image reconstruction.

11.
We propose a full-Newton interior-point algorithm for the horizontal linear complementarity problem with a P*(κ) matrix. The advantage of a full-Newton algorithm is that no line search is required at each iteration. For suitable choices of the threshold of the central-path neighborhood and of the barrier-parameter update, the full Newton step within the central-path neighborhood is shown to be locally quadratically convergent. Finally, the iteration complexity of the algorithm is shown to be $O\big((1+\kappa)\sqrt{n}\,\log(n\mu_0/\varepsilon)\big)$.

12.
Building on the nonmonotone line search rule of Zhang H.C., this paper designs a new nonmonotone line-search BFGS algorithm for unconstrained optimization problems. Under certain conditions, the linear and superlinear convergence of the algorithm is established. Numerical examples show that the algorithm is effective.
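For readers unfamiliar with the rule, the Zhang–Hager nonmonotone condition replaces the usual Armijo bound f(x_k) with a weighted average C_k of past function values, so occasional increases in f are tolerated. The sketch below combines that acceptance rule with a textbook BFGS update on the Rosenbrock function; the memory parameter η, the backtracking factor, and the test problem are illustrative assumptions and need not match the paper's algorithm.

```python
import numpy as np

def f(x):    # Rosenbrock test function
    return 100.0 * (x[1] - x[0]**2)**2 + (1 - x[0])**2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

def nonmonotone_bfgs(x, eta=0.85, delta=1e-4, tol=1e-8, max_iter=500):
    """BFGS with the Zhang-Hager nonmonotone Armijo rule: accept alpha if
    f(x + alpha d) <= C_k + delta * alpha * g'd, where C_k is a weighted average
    of past function values (eta controls how much memory the average keeps)."""
    H = np.eye(len(x))
    g = grad(x)
    C, Q = f(x), 1.0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g
        alpha, slope = 1.0, delta * (g @ d)
        while f(x + alpha * d) > C + alpha * slope:    # backtracking on the relaxed bound
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, ydiff = x_new - x, g_new - g
        if s @ ydiff > 1e-12:                          # keep the BFGS matrix positive definite
            rho = 1.0 / (s @ ydiff)
            I = np.eye(len(x))
            H = (I - rho * np.outer(s, ydiff)) @ H @ (I - rho * np.outer(ydiff, s)) \
                + rho * np.outer(s, s)
        Q_new = eta * Q + 1.0                          # update the nonmonotone reference value
        C = (eta * Q * C + f(x_new)) / Q_new
        Q, x, g = Q_new, x_new, g_new
    return x

print(nonmonotone_bfgs(np.array([-1.2, 1.0])))
```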

13.
A mixture approach to clustering is an important technique in cluster analysis, and a mixture of multivariate multinomial distributions is usually used to analyze categorical data with a latent class model. Parameter estimation is an important step for a mixture distribution. Described here are four approaches to estimating the parameters of a mixture of multivariate multinomial distributions. The first approach is an extended maximum likelihood (ML) method. The second approach is based on the well-known expectation–maximization (EM) algorithm. The third approach is the classification maximum likelihood (CML) algorithm. In this paper, we propose a new approach using the so-called fuzzy class model, and then create the fuzzy classification maximum likelihood (FCML) approach for categorical data. The accuracy, robustness, and effectiveness of these four algorithms for estimating the parameters of multivariate binomial mixtures are compared using real empirical data and samples drawn from multivariate binomial mixtures of two classes. The results show that the proposed FCML algorithm has better accuracy, robustness, and effectiveness, and is superior to the ML, EM, and CML algorithms overall. Thus, we recommend FCML as another good tool for estimating the parameters of mixture multivariate multinomial models.
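Of the four estimators compared above, the EM baseline is the easiest to write down: responsibilities for the latent classes in the E step, then class weights and per-class category probabilities in the M step. The sketch below fits such a latent class (mixture of multivariate multinomial) model to simulated categorical data; the data dimensions, number of classes, and initialization are illustrative assumptions, and the FCML variant itself is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy latent class data: n items, J categorical variables with C categories each,
# generated from K = 2 latent classes (all sizes are illustrative assumptions).
n, J, C, K = 1000, 4, 3, 2
pi_true = np.array([0.6, 0.4])
theta_true = rng.dirichlet(np.ones(C), size=(K, J))        # P(variable j = c | class k)
z = rng.choice(K, size=n, p=pi_true)
X = np.stack([[rng.choice(C, p=theta_true[z[i], j]) for j in range(J)] for i in range(n)])

# EM for the mixture of multivariate multinomial (latent class) model.
pi = np.full(K, 1.0 / K)
theta = rng.dirichlet(np.ones(C), size=(K, J))
for _ in range(200):
    # E-step: responsibilities r[i, k] under the current parameters
    logp = np.log(pi) + np.array([[np.sum(np.log(theta[k, range(J), X[i]]))
                                   for k in range(K)] for i in range(n)])
    logp -= logp.max(axis=1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: class weights and per-class category probabilities
    pi = r.mean(axis=0)
    for k in range(K):
        for j in range(J):
            counts = np.bincount(X[:, j], weights=r[:, k], minlength=C)
            theta[k, j] = (counts + 1e-6) / (counts.sum() + C * 1e-6)  # avoid log(0)

print("class weights:", np.round(pi, 3))
```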

14.
We view decision optimization in CRM as a multi-stage, delayed-reward problem. Using the KDD98 dataset as an example, the sequential mailing decision is formulated as a partially observable Markov decision process (POMDP). An EM algorithm for estimating the model parameters is proposed and implemented in MATLAB; the best model is selected using the log-likelihood and the BIC statistic; the model is validated with one-step-ahead prediction; and the model is solved with the incremental pruning algorithm. Empirical results show that the POMDP model captures the dynamics of customer purchasing behavior well and predicts customer purchases accurately. On this basis, we show how to use the model to optimize the direct-mail policy with the goal of maximizing customer lifetime value.

15.
In this article, we propose a three-dimensional dwindling filter algorithm for general nonlinear programming. The envelope of the three-dimensional dwindling filter becomes thinner and thinner as the step size approaches zero, so that the new filter has more flexibility for the acceptance of the trial step. Moreover, we show that the feasibility restoration phase, which is always used in traditional filter methods, is not needed. A modified limited-memory Broyden–Fletcher–Goldfarb–Shanno method is employed in the algorithm, and the update matrices are positive definite when the Lagrangian function is a general convex function. Under mild conditions, the global convergence of the new algorithm is analyzed. Preliminary numerical experiments are reported to show the effectiveness of the proposed algorithm.

16.
Changepoint models are widely used to model the heterogeneity of sequential data. We present a novel sequential Monte Carlo (SMC) online expectation–maximization (EM) algorithm for estimating the static parameters of such models. The SMC online EM algorithm has a computational cost per time step that is linear in the number of particles, which can be particularly important when the data are representable as a long sequence of observations, since it drastically reduces the computational requirements for implementation. We present an asymptotic analysis of the stability of the SMC estimates used in the online EM algorithm, and demonstrate the performance of this scheme using both simulated and real data originating from DNA analysis. The supplementary materials for the article are available online.
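The mechanics of an SMC online EM pass are: run the particle filter forward, turn each filtering distribution into an estimate of the time-t sufficient statistic, fold it into a running average with a decreasing step size, and periodically apply the M step. The sketch below does this for a single parameter (the observation variance) of a toy Gaussian state-space model rather than a changepoint model, and it uses a filtering rather than smoothing approximation of the statistic; the model, step-size schedule, and burn-in are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy state-space model: x_t = x_{t-1} + N(0, 0.1), y_t = x_t + N(0, tau2_true).
T, tau2_true = 5000, 0.5
x = np.cumsum(np.sqrt(0.1) * rng.normal(size=T))
y = x + np.sqrt(tau2_true) * rng.normal(size=T)

# SMC online EM for the observation variance tau2: the per-time statistic
# E[(y_t - x_t)^2 | y_{1:t}] is approximated with the particle filter and folded
# into a running average S with a decreasing step size gamma_t (a deliberately
# minimal simplification of a full online EM scheme).
N = 500
parts = np.zeros(N)
tau2, S = 1.0, 0.0
for t in range(T):
    parts = parts + np.sqrt(0.1) * rng.normal(size=N)            # propagate particles
    w = np.exp(-0.5 * (y[t] - parts) ** 2 / tau2) + 1e-300
    w /= w.sum()
    s_hat = np.sum(w * (y[t] - parts) ** 2)                       # per-time statistic
    gamma = 1.0 / (t + 2) ** 0.6                                  # step-size schedule
    S = (1 - gamma) * S + gamma * s_hat
    if t > 50:                                                    # burn-in before M-steps
        tau2 = S                                                  # M-step: tau2 = E[(y-x)^2]
    parts = parts[rng.choice(N, size=N, p=w)]                     # resample

print("estimated observation variance:", round(tau2, 3))
```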

17.
We introduce a variable step size algorithm for the pathwise numerical approximation of solutions to stochastic ordinary differential equations. The algorithm is based on a new pair of embedded explicit Runge–Kutta methods of strong order 1.5(1.0), where the method of strong order 1.5 advances the numerical computation and the difference between the approximations defined by the two methods is used to control the local error. We show that convergence of our method is preserved even though the discretization times are no longer stopping times, and we present numerical results which demonstrate the effectiveness of the variable step size implementation compared to a fixed step size implementation.
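The control logic of an embedded pair for an SDE can be illustrated with lower-order building blocks: a higher-order step advances the solution, the gap to a lower-order step estimates the local error, and a rejected step is halved with its Brownian increment split by a Brownian bridge so that the driving path stays consistent. The sketch below does this for geometric Brownian motion with a Milstein/Euler–Maruyama pair of strong orders 1.0(0.5), a stand-in for the paper's order-1.5(1.0) Runge–Kutta pair; the tolerance, growth factor, and model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

# Geometric Brownian motion dX = a*X dt + b*X dW.  A Milstein step (strong order 1.0)
# advances the solution and an Euler-Maruyama step (strong order 0.5) supplies the
# local error estimate; rejected steps are halved with a Brownian-bridge split.
a, b, X, t, T = 1.0, 1.2, 1.0, 0.0, 1.0
tol, h = 1e-2, 0.05
pending = [(h, np.sqrt(h) * rng.normal())]         # stack of (step size, Brownian increment)
n_steps = 0
while pending:
    h, dW = pending.pop()
    x_mil = X + a * X * h + b * X * dW + 0.5 * b**2 * X * (dW**2 - h)
    x_eul = X + a * X * h + b * X * dW
    if abs(x_mil - x_eul) <= tol or h < 1e-8:      # accept the step
        X, t = x_mil, t + h
        n_steps += 1
        if t < T:                                   # propose the next step (grow it a bit)
            h_next = min(1.5 * h, T - t)
            pending.append((h_next, np.sqrt(h_next) * rng.normal()))
    else:                                           # reject: halve the step and split the
        dW1 = 0.5 * dW + np.sqrt(h / 4) * rng.normal()   # increment via a Brownian bridge
        pending.append((h / 2, dW - dW1))           # second half-step (popped later)
        pending.append((h / 2, dW1))                # first half-step (popped next)

print(f"X(T) = {X:.4f} after {n_steps} accepted steps")
```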

18.
The ECM and ECME algorithms are generalizations of the EM algorithm in which the maximization (M) step is replaced by several conditional maximization (CM) steps. The order in which the CM-steps are performed is trivial to change and generally affects how fast the algorithm converges. Moreover, the same order of CM-steps need not be used at each iteration, and in some applications it is feasible to group two or more CM-steps into one larger CM-step. These issues also arise when implementing the Gibbs sampler, and in this article we study them in the context of fitting log-linear and random-effects models with ECM-type algorithms. We find that some standard theoretical measures of the rate of convergence can be of little use in comparing the computational time required, and that common strategies such as using a random ordering may not provide the desired effects. We also develop two algorithms for fitting random-effects models to illustrate that, with careful selection of CM-steps, ECM-type algorithms can be substantially faster than the standard EM algorithm.

19.
We consider the inpainting problem for noisy images. It is very challenging to suppress noise while performing image inpainting. In this paper, an image-patch-based nonlocal variational method is proposed for simultaneous inpainting and denoising. Our approach is based on the assumption that small image patches follow a distribution that can be described by a high-dimensional Gaussian mixture model. By a maximum a posteriori (MAP) estimation, we formulate a new regularization term from the log-likelihood function of the mixture model. To optimize this regularization term efficiently, we adopt the idea of the expectation–maximization (EM) algorithm, in which the expectation step yields an adaptive weighting function that can be regarded as nonlocal connections among pixels. Using this fact, we build a framework for nonlocal image inpainting under noise. Moreover, we mathematically prove the existence of a minimizer for the proposed inpainting model. By using a splitting algorithm, the proposed model is able to perform image inpainting and denoising simultaneously. Numerical results show that the proposed method produces impressive reconstructions even when the inpainting region is rather large.

20.
We present a new methodology to solve discretely-constrained mathematical programs with equilibrium constraints (DC-MPECs). Typically these problems include an upper planning-level optimization with some discrete decision variables (e.g., build/don't build) as well as a lower operations-level problem often described by an optimization or nonlinear complementarity problem. This lower-level problem may also include some discrete variables. MPECs are very challenging problems to solve, and the inclusion of integrality constraints makes this class of problems even more computationally difficult. We develop a new variant of the Benders algorithm, combined with a heuristic procedure that decomposes the domain of the upper-level discrete variables, to solve the resulting DC-MPECs. We provide convergence theory as well as a number of numerical examples, some derived from energy applications, to validate the new method. It should be noted that the convergence theory applies if the heuristic procedure correctly identifies a decomposition of the domain such that the lower-level problem's optimal value function is convex. This is challenging, but our numerical results are positive.
