Similar documents
 20 similar documents found (search time: 15 ms)
1.
In fault diagnosis, intermittent failure models are an important tool for adequately dealing with realistic failure behavior. Current model-based diagnosis approaches account for the fact that a component cj may fail intermittently by introducing a parameter gj that expresses the probability that the component exhibits correct behavior. This component parameter gj, in conjunction with the a priori fault probability, is used in a Bayesian framework to compute the posterior fault candidate probabilities. Usually, information on gj is not known a priori. While proper estimation of gj can be critical to diagnostic accuracy, at present only approximations have been proposed. We present a novel framework, coined Barinel, that computes estimates of the gj as an integral part of the posterior candidate probability computation, using a maximum likelihood estimation approach. Barinel's diagnostic performance is evaluated on synthetic systems, the Siemens software diagnosis benchmark, and real-world programs. Our results show that our approach is superior to reasoning approaches based on classical persistent failure models, as well as to previously proposed intermittent failure models.
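For the single-fault special case the maximum likelihood estimate of gj has a simple closed form: if a candidate component is exercised by `passing` passing runs and `failing` failing runs, the likelihood g^passing * (1 - g)^failing peaks at passing/(passing + failing). A minimal sketch under that assumption (function names are illustrative, not Barinel's API; the multiple-fault case requires numerical optimisation):

```python
def goodness_mle(passing, failing):
    """MLE of the intermittency parameter g for a single-fault candidate:
    the likelihood g**passing * (1 - g)**failing peaks at this ratio."""
    return passing / (passing + failing)

def posterior(prior, passing, failing):
    """Unnormalised posterior of a single-fault candidate,
    with the likelihood evaluated at the MLE of g."""
    g = goodness_mle(passing, failing)
    return prior * (g ** passing) * ((1 - g) ** failing)
```

For multiple-fault candidates the likelihood couples the gj of all involved components, so a gradient-based maximisation replaces the closed form.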

2.
Dirichlet distributions are natural choices to analyse data described by frequencies or proportions since they are the simplest known distributions for such data apart from the uniform distribution. They are often used whenever proportions are involved, for example, in text-mining, image analysis, biology or as a prior of a multinomial distribution in Bayesian statistics. As the Dirichlet distribution belongs to the exponential family, its parameters can be easily inferred by maximum likelihood. Parameter estimation is usually performed with the Newton-Raphson algorithm after an initialisation step using either the moments or Ronning's method. However, this initialisation can result in parameters that lie outside the admissible region. A simple and very efficient alternative based on a maximum likelihood approximation is presented. The advantages of the presented method compared to two other methods are demonstrated on synthetic data sets as well as on a practical biological problem: the clustering of protein sequences based on their amino acid compositions.
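The moment-based initialisation mentioned above can be sketched as follows: the Dirichlet mean vector and the second moment of the first component are matched to their sample counterparts, which fixes the precision s and hence all parameters (a hypothetical helper, not code from the paper):

```python
def dirichlet_moments_init(data):
    """Moment-based initial estimate of Dirichlet parameters alpha_j.
    data: list of proportion vectors, each summing to 1."""
    n, dim = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(dim)]
    # the second moment of the first component determines the precision s
    m2 = sum(row[0] ** 2 for row in data) / n
    s = (means[0] - m2) / (m2 - means[0] ** 2)
    return [s * m for m in means]
```

When the data are nearly degenerate the denominator can become tiny or negative, which is exactly the inadmissible-region problem the abstract refers to.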

3.
Determining the intrinsic dimension of high-dimensional data is key to the success of dimensionality reduction. The dimension estimator based on maximum likelihood estimation (MLE) is a recently proposed method that is simple to implement and, with a suitable choice of neighbourhood size, yields good results. However, it shows a noticeable bias when the number of neighbours is too small or too large, fundamentally because it ignores the differing contribution each point makes to the intrinsic dimension. Taking full account of the distribution of the data set, an improved MLE, the adaptive maximum likelihood estimator (AMLE), is proposed. Experiments show that on both synthetic and real data sets AMLE improves estimation accuracy considerably over MLE and is far less sensitive to the choice of the number of neighbours.
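The baseline MLE dimension estimator that this work improves on (due to Levina and Bickel) averages per-point estimates built from the distances to each point's k nearest neighbours. A minimal brute-force sketch (illustrative, not the paper's code):

```python
import math

def mle_dimension(points, k=5):
    """Levina-Bickel MLE of intrinsic dimension: average over all points of
    (k-1) / sum_j log(T_k / T_j), where T_j is the j-th nearest-neighbour
    distance. Brute-force O(n^2) distance computation for clarity."""
    estimates = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        tk = dists[k - 1]                       # distance to the k-th neighbour
        s = sum(math.log(tk / dists[j]) for j in range(k - 1))
        estimates.append((k - 1) / s)           # per-point dimension estimate
    return sum(estimates) / len(estimates)
```

The adaptive variant described in the abstract replaces this uniform average with data-dependent per-point contributions.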

4.
In this paper, we consider distributed maximum likelihood estimation (MLE) with dependent quantized data under the assumption that the structure of the joint probability density function (pdf) is known, but it contains unknown deterministic parameters. The parameters may include different vector parameters corresponding to marginal pdfs and parameters that describe the dependence of observations across sensors. Since MLE with a single quantizer is sensitive to the choice of thresholds due to the uncertainty of the pdf, we concentrate on MLE with multiple groups of quantizers (which can be determined by the use of prior information or some heuristic approaches) to guard against the risk of a poor/outlier quantizer. The asymptotic efficiency of the MLE scheme with multiple quantizers is proved under some regularity conditions, and the asymptotic variance is derived to be the inverse of a weighted linear combination of Fisher information matrices based on the multiple different quantizers, which can be used to show the robustness of our approach. As an illustrative example, we consider an estimation problem with a bivariate non-Gaussian pdf that has applications in distributed constant false alarm rate (CFAR) detection systems. Simulations show the robustness of the proposed MLE scheme, especially when the number of quantized measurements is small.

5.
The log-likelihood function of threshold vector error correction models is neither differentiable nor smooth with respect to some parameters. Therefore, it is very difficult to implement maximum likelihood estimation (MLE) of the model. A new estimation method, based on a hybrid algorithm and MLE, is proposed to resolve this problem. The hybrid algorithm, referred to as genetic-simulated annealing, not only inherits aspects of genetic algorithms (GAs), but also avoids premature convergence by incorporating elements of simulated annealing (SA). Simulation experiments demonstrate that the proposed method makes it possible to estimate the parameters of larger cointegrating systems. Additionally, numerical results show that the hybrid algorithm does a better job than either SA or GA alone.
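The simulated-annealing ingredient of such a hybrid can be sketched as a random-walk search that always accepts improvements and accepts uphill moves with a Boltzmann probability that decays with temperature. A generic one-dimensional sketch (not the paper's genetic-SA hybrid; parameter values are illustrative):

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Minimise f starting from x0 with Gaussian proposals and a
    geometric cooling schedule; returns the best point visited."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)
        fc = f(cand)
        # always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-delta / temperature)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best
```

The uphill acceptance early in the schedule is what lets the hybrid escape local optima of the non-smooth likelihood; the GA part of the paper's method additionally maintains a population of candidates.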

6.
Maximum likelihood estimation has a rich history. It has been successfully applied to many problems including dynamical system identification. Different approaches have been proposed in the time and frequency domains. In this paper we discuss the relationship between these approaches and we establish conditions under which the different formulations are equivalent for finite length data. A key point in this context is how initial (and final) conditions are considered and how they are introduced in the likelihood function.

7.
A robust block-matching motion estimation algorithm is described in detail. Compared with existing block-matching motion estimation algorithms: first, colour information is introduced to improve the accuracy of motion estimation; second, an adaptive strategy is applied in a broader sense to reduce computation while preserving robustness; finally, the proposed prediction-correction composite search makes full use of global information about object motion, overcoming the drawbacks of the three-step search and the full search while retaining the strengths of both, thereby improving search efficiency and matching precision. Experimental results show that the algorithm offers strong noise immunity, accurate motion estimation and high computational efficiency.
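For reference, the exhaustive full search that such methods improve on evaluates a sum-of-absolute-differences (SAD) cost for every candidate displacement in a window around the block. A minimal grayscale sketch (the paper additionally exploits colour information; names are illustrative):

```python
def sad(a, b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def block(frame, r, c, n):
    """Extract the n x n block whose top-left corner is (r, c)."""
    return [row[c:c + n] for row in frame[r:r + n]]

def full_search(ref, cur, r, c, n=4, w=2):
    """Motion vector of the n x n block at (r, c) in `cur`, found by
    exhaustively testing every displacement within +/- w in `ref`."""
    target = block(cur, r, c, n)
    best, mv = float("inf"), (0, 0)
    for dr in range(-w, w + 1):
        for dc in range(-w, w + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr <= len(ref) - n and 0 <= cc <= len(ref[0]) - n:
                cost = sad(block(ref, rr, cc, n), target)
                if cost < best:
                    best, mv = cost, (dr, dc)
    return mv
```

Fast algorithms such as the three-step search sample this window sparsely instead of testing all (2w+1)^2 displacements.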

8.
In video coding, research is focused on the development of fast motion estimation (ME) algorithms while keeping the coding distortion as small as possible. It has been observed that real-world video sequences exhibit a wide range of motion content, from uniform to random; therefore, if the motion characteristics of video sequences are taken into account beforehand, it is possible to develop a robust motion estimation algorithm that is suitable for all kinds of video sequences. This is the basis of the proposed algorithm. The proposed algorithm involves a multistage approach that includes motion vector prediction and motion classification using the characteristics of video sequences. In the first step, spatio-temporal correlation is used for initial search centre prediction. This strategy decreases the effect of the unimodal error surface assumption and also moves the search closer to the global minimum, hence increasing the computation speed. Secondly, the homogeneity analysis helps to identify smooth and random motion. Thirdly, global minimum prediction based on the unimodal error surface assumption helps to identify the proximity of the global minimum. Fourthly, adaptive search pattern selection takes into account various types of motion content by dynamically switching between stationary, center-biased and uniform search patterns. Finally, the early termination of the search process is adaptive and is based on the homogeneity between neighboring blocks. Extensive simulation results for several video sequences affirm the effectiveness of the proposed algorithm. The self-tuning property enables the algorithm to perform well for several types of benchmark sequences, yielding better video quality and less complexity as compared to other ME algorithms.
Implementation of the proposed algorithm in JM12.2 of H.264/AVC shows a reduction in computational complexity, measured in terms of encoding time, while maintaining almost the same bit rate and PSNR as compared to the Full Search algorithm.

9.
A novel, computationally efficient and robust scheme for multiple initial point prediction is proposed in this paper. A combination of spatial and temporal predictors is used for initial motion vector prediction, determination of the magnitude and direction of motion, and search pattern selection. Initially, three predictors from the spatio-temporal neighboring blocks are selected. If all these predictors point to the same quadrant, then a simple search pattern based on the direction and magnitude of the predicted motion vector is selected. However, if the predictors belong to different quadrants, then we start the search from multiple initial points to get a clear idea of the location of the minimum point; in this case multiple rood search patterns are selected. We have also defined a local minimum elimination criterion to avoid being trapped in a local minimum. The predictive search center is closer to the global minimum, which decreases the effect of the monotonic error surface assumption and its impact on the motion field, and also increases the computation speed. Further speed-up is obtained by considering a zero-motion threshold for no-motion blocks. The image quality measured in terms of PSNR also shows good results.

10.
To address the problems of sub-pixel accuracy and the adhesion (foreground fattening) effect in stereo matching with a small base-height ratio, a maximum-likelihood-based stereo matching method for small base-height ratios is proposed. The method first selects a matching window for every point in the reference image according to a hybrid window-selection strategy; it then computes matching costs over the disparity range with a normalised cross-correlation function and assigns each point a disparity using a winner-take-all strategy; finally, a maximum-likelihood-based sub-pixel matching step produces sub-pixel disparities. Experimental results show that the method effectively reduces the adhesion effect in stereo matching while achieving high sub-pixel accuracy, with an average sub-pixel precision of up to 1/20 of a pixel.
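Two of the building blocks above, the normalised cross-correlation cost and a sub-pixel refinement of the winner-take-all disparity, can be sketched as follows. The parabola fit shown here is a common substitute for the paper's maximum-likelihood sub-pixel step, used purely for illustration:

```python
import math

def ncc(a, b):
    """Zero-mean normalised cross-correlation of two equal-length windows."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def subpixel_disparity(costs, d):
    """Refine the integer disparity d (the winner-take-all peak) by
    fitting a parabola through the correlation values at d-1, d, d+1."""
    c0, c1, c2 = costs[d - 1], costs[d], costs[d + 1]
    denom = c0 - 2 * c1 + c2
    return d + 0.5 * (c0 - c2) / denom if denom else float(d)
```

A symmetric peak refines to the integer disparity itself; an asymmetric one shifts toward the higher-correlation neighbour.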

11.
The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement with weights estimated from the data; however, it is still sensitive to IO and to a patch of AO outliers. On the other hand, the TMLE reduces to a combinatorial optimisation problem and is hard to implement, but it is robust to both types of outliers considered here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. Monte Carlo simulation results show the efficiency of the proposed algorithms.

12.
The identification of the spatially dependent parameters in Partial Differential Equations (PDEs) is important in both physics and control problems. A methodology is presented to identify spatially dependent parameters from spatio-temporal measurements. Local non-rational transfer functions are derived based on three local measurements, allowing for a local estimate of the parameters. A sample Maximum Likelihood Estimator (SMLE) in the frequency domain is used, because it takes noise properties into account and allows for highly accurate, consistent parameter estimation. Confidence bounds on the parameters are estimated based on the noise properties of the measurements. This method is successfully applied to simulations of a finite difference model of a parabolic PDE with piecewise constant parameters.

13.
This paper studies the linear dynamic errors-in-variables problem for filtered white noise excitations. First, a frequency domain Gaussian maximum likelihood (ML) estimator is constructed that can handle discrete-time as well as continuous-time models on (a) part(s) of the unit circle or imaginary axis. Next, the ML estimates are calculated via a computationally simple and numerically stable Gauss-Newton minimization scheme. Finally, the Cramér-Rao lower bound is derived.

14.
A new likelihood-based AR approximation is given for ARMA models. The usual algorithms for the computation of the likelihood of an ARMA model require O(n) flops per function evaluation. Using our new approximation, an algorithm is developed which requires only O(1) flops in repeated likelihood evaluations. In most cases, the new algorithm gives results identical to or very close to the exact maximum likelihood estimate (MLE). This algorithm is easily implemented in high level quantitative programming environments (QPEs) such as Mathematica, MatLab and R. In order to obtain reasonable speed, previous ARMA maximum likelihood algorithms are usually implemented in C or some other machine-efficient language. With our algorithm it is easy to do maximum likelihood estimation for long time series directly in the QPE of your choice. The new algorithm is extended to obtain the MLE for the mean parameter. Simulation experiments which illustrate the effectiveness of the new algorithm are discussed. Mathematica and R packages which implement the algorithm discussed in this paper are available [McLeod, A.I., Zhang, Y., 2007. Online supplements to “Faster ARMA Maximum Likelihood Estimation”, 〈http://www.stats.uwo.ca/faculty/aim/2007/faster/〉]. Based on these package implementations, it is expected that the interested researcher would be able to implement this algorithm in other QPEs.

15.
A numerical maximum likelihood (ML) estimation procedure is developed for the constrained parameters of multinomial distributions. The main difficulty involved in computing the likelihood function is the precise and fast determination of the multinomial coefficients. To this end, the coefficients are rewritten as a telescopic product. The presented method is applied to the ML estimation of the Zipf-Mandelbrot (ZM) distribution, which provides a true model in many real-life cases. The examples discussed arise from ecological and medical observations. Based on the estimates, the hypothesis that the data are ZM distributed is tested using a chi-square test. The computer code of the presented procedure is available from the author on request.
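One way to realise the telescopic-product idea is to write the multinomial coefficient as a product of binomial coefficients over the cumulative counts, which keeps every intermediate value an exact integer. A sketch (not the paper's code; for floating-point log-likelihoods, `math.lgamma` is the usual alternative):

```python
from math import comb

def multinomial(counts):
    """Multinomial coefficient n! / (k1! k2! ... km!) computed as the
    telescopic product C(k1, k1) * C(k1+k2, k2) * C(k1+k2+k3, k3) * ...
    Each factor is an exact integer, so no precision is lost."""
    total, result = 0, 1
    for k in counts:
        total += k
        result *= comb(total, k)
    return result
```

For example, `multinomial([2, 1, 1])` is 4!/(2! 1! 1!) = 12, built up as C(2,2) * C(3,1) * C(4,1).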

16.
A cross-shaped motion search algorithm
Although many fast motion estimation algorithms have appeared in recent years, the huge computational load of motion search remains the bottleneck of video compression speed. Based on the distribution characteristics of motion vectors, this paper proposes a new motion search algorithm. The algorithm is not only simple in structure; test results also show that it improves considerably on the existing DS (diamond search) algorithm in both the number of search points and image quality, requiring in the best case only 3/4 of the search points of DS.

17.
When modelling biological processes, there are always errors, uncertainties and variations present. In this paper, we consider the coefficients in the mathematical model to be random variables, whose distribution and moments are unknown a priori, and need to be determined by comparison with experimental data. A stochastic spectral representation of the parameters and the solution stochastic process is used, based on polynomial chaoses. The polynomial chaos representation generates a system of equations of the same type as the original model. The inverse problem of finding the parameters is reduced to establishing the best-fit values of the random variables that represent them, and this is done using maximum likelihood estimation. In particular, in modelling biofilm growth, there are variations, measurement errors and uncertainties in the processes. The biofilm growth model is given by a parabolic differential equation, so the polynomial chaos formulation generates a system of partial differential equations. Examples are presented.

18.
Robust diffusion adaptive estimation algorithms based on the maximum correntropy criterion (MCC), including adapt-then-combine MCC and combine-then-adapt MCC, are developed to deal with distributed estimation over a network in impulsive (long-tailed) noise environments. The cost functions used in distributed estimation are in general based on the mean square error (MSE) criterion, which is desirable when the measurement noise is Gaussian. In non-Gaussian situations, especially in the impulsive-noise case, MCC-based methods may achieve much better performance than MSE methods, as they take into account higher-order statistics of the error distribution. The proposed methods can also outperform the robust diffusion least mean p-power (DLMP) and diffusion minimum error entropy (DMEE) algorithms. Mean and mean-square convergence analyses of the new algorithms are also carried out.
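The core of an MCC-based adaptive update is that the LMS step is scaled by a Gaussian kernel of the error, so impulsive errors contribute almost nothing to the adaptation. A scalar-weight sketch (illustrative only; the paper's diffusion algorithms add adapt and combine steps across network nodes):

```python
import math

def mcc_lms(xs, ds, mu=0.1, sigma=1.0):
    """LMS-style adaptation under the maximum correntropy criterion:
    each step is scaled by exp(-e^2 / (2 sigma^2)), which downweights
    large (impulsive) errors to nearly zero."""
    w = 0.0
    for x, d in zip(xs, ds):
        e = d - w * x
        kernel = math.exp(-e * e / (2 * sigma ** 2))
        w += mu * kernel * e * x
    return w
```

A single impulsive sample barely perturbs the estimate, whereas a plain MSE-driven LMS update would be thrown far off by the same outlier.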

19.
Identifying faulty links inside a network is of great value for improving network performance. This work studies the problem of diagnosing faulty links from end-to-end measurements under a tree topology, and proposes a maximum pseudo-likelihood estimation method for the prior failure probability of each link: the tree topology is partitioned into a series of subtrees, each with two leaf nodes, and the expectation-maximization (EM) algorithm is used to maximise the likelihood function of each subtree, yielding the link prior probabilities. Simulation experiments show that the method matches the estimation accuracy of the existing simultaneous-equations approach while greatly reducing time complexity, demonstrating its effectiveness.

20.
A central issue in dimension reduction is choosing a sensible number of dimensions to be retained. This work demonstrates the surprising result of the asymptotic consistency of the maximum likelihood criterion for determining the intrinsic dimension of a dataset in an isotropic version of probabilistic principal component analysis (PPCA). Numerical experiments on simulated and real datasets show that the maximum likelihood criterion can actually be used in practice and outperforms existing intrinsic dimension selection criteria in various situations. This paper also exhibits the limits of the maximum likelihood criterion, leading to a recommendation of the AIC criterion in specific situations. A useful application of this work would be the automatic selection of intrinsic dimensions in mixtures of isotropic PPCA for classification.
