Similar Documents
10 similar documents found.
1.
The sparse synthesis model for signals has become very popular in the last decade, leading to improved performance in many signal processing applications. This model assumes that a signal may be described as a linear combination of a few columns (atoms) of a given synthesis matrix (dictionary). The co-sparse analysis model is a recently introduced counterpart, whereby signals are assumed to be orthogonal to many rows of a given analysis dictionary; these rows are called the co-support. The analysis model has already led to a series of contributions that address the pursuit problem: identifying the co-support of a corrupted signal in order to restore it. While all the existing work adopts a deterministic point of view towards the design of such pursuit algorithms, this paper introduces a Bayesian estimation point of view, starting with a random generative model for the co-sparse analysis signals. This is followed by a derivation of Oracle, Minimum-Mean-Squared-Error (MMSE), and Maximum-A-Posteriori-Probability (MAP) estimators. We present a comparison between the deterministic formulations and these estimators, drawing some connections between the two. We develop practical approximations to the MAP and MMSE estimators, and demonstrate the proposed reconstruction algorithms in several synthetic and real image experiments, showing their potential and applicability.
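The co-support notion above can be made concrete in a few lines. Below is a toy Python sketch (the function name and tolerance are ours, not the paper's): given an analysis dictionary Omega stored as a list of rows, the co-support of a signal x is the index set of rows whose inner product with x is numerically zero.

```python
def cosupport(omega, x, tol=1e-8):
    """Return the indices of analysis rows to which the signal x is
    (numerically) orthogonal -- the co-support of x under omega."""
    return [i for i, row in enumerate(omega)
            if abs(sum(a * b for a, b in zip(row, x))) <= tol]
```

For example, with omega = [[1, 0], [0, 1], [1, -1]] and x = [2, 2], only the third row is orthogonal to x, so the co-support is [2].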

2.
When continuous predictors are present, the classical Pearson and deviance goodness-of-fit tests for assessing logistic model fit break down. The Hosmer-Lemeshow test can be used in these situations. While simple to perform and widely used, it does not have desirable power in many cases and provides no further information on the source of any detectable lack of fit. Tsiatis proposed a score statistic to test for covariate regional effects. While conceptually elegant, its lack of a general rule for how to partition the covariate space has, to a certain degree, limited its popularity. We propose a new method for goodness-of-fit testing that uses a very general partitioning strategy (clustering) in the covariate space and either a Pearson statistic or a score statistic. Properties of the proposed statistics are discussed, and a simulation study demonstrates increased power to detect model misspecification in a variety of settings. An application of these different methods to data from a clinical trial illustrates their use. Discussions on further improvement of the proposed tests and on extending this new method to other data situations, such as ordinal response regression models, are also included.
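For reference, the classical Hosmer-Lemeshow statistic that the proposed clustering-based tests improve upon can be sketched as follows (a minimal Python illustration with decile grouping; the function name and grouping rule are ours):

```python
def hosmer_lemeshow(probs, outcomes, n_groups=10):
    """Chi-square-type statistic comparing observed and expected event
    counts within groups formed by sorting on the predicted probability."""
    pairs = sorted(zip(probs, outcomes), key=lambda pair: pair[0])
    n = len(pairs)
    stat = 0.0
    for g in range(n_groups):
        chunk = pairs[g * n // n_groups:(g + 1) * n // n_groups]
        if not chunk:
            continue
        observed = sum(y for _, y in chunk)   # events seen in this group
        expected = sum(p for p, _ in chunk)   # events the model predicts
        p_bar = expected / len(chunk)
        if 0.0 < p_bar < 1.0:
            stat += (observed - expected) ** 2 / (len(chunk) * p_bar * (1.0 - p_bar))
    return stat
```

Under the null, the statistic is referred to a chi-square distribution with n_groups - 2 degrees of freedom; a well-calibrated model yields a small value, a miscalibrated one a large value.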

3.
Multivariate recurrent event data arise in many clinical and observational studies, in which subjects may experience multiple types of recurrent events. In some applications, event times can always be observed, but the types of some events may be missing. In this article, a semiparametric additive rates model is proposed for analyzing multivariate recurrent event data when event categories are missing at random. A weighted estimating equation approach is developed to estimate the parameters of interest, and the resulting estimators are shown to be consistent and asymptotically normal. In addition, a lack-of-fit test is presented to assess the adequacy of the model. Simulation studies demonstrate that the proposed method performs well in practical settings. An application to a platelet transfusion reaction study is provided.

4.
Extreme value theory is used to derive asymptotically motivated models for unusual or rare events, e.g. the upper or lower tails of a distribution. A new flexible extreme value mixture model is proposed, combining a non-parametric kernel density estimator for the bulk of the distribution with an appropriate tail model. The complex uncertainties associated with threshold choice are accounted for, and new insights into the impact of threshold choice on density and quantile estimates are obtained. Bayesian inference is used to account for all uncertainties and enables inclusion of expert prior information, potentially overcoming the inherent sparsity of extremal data. A simulation study and an empirical application to determining normal ranges of physiological measurements for pre-term infants are used to demonstrate the performance of the proposed mixture model. The potential of the proposed model for overcoming the lack of consistency of likelihood-based kernel bandwidth estimators when faced with heavy-tailed distributions is also demonstrated.

5.
Recently, there has been considerable interest in finite mixture models with semi-/non-parametric component distributions. Identifiability of such model parameters is generally not obvious, and when it holds, inference methods are rather specific to the mixture model under consideration. Hence, a generalization of the EM algorithm to semiparametric mixture models is proposed. The approach is methodological and can be applied to a wide class of semiparametric mixture models. The behavior of the proposed EM-type estimators is studied numerically, not only through several Monte Carlo experiments but also through comparison with alternative methods existing in the literature. In addition to these numerical experiments, applications to real data are provided, showing that the estimation method behaves well and is fast and easy to implement.
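As background, the fully parametric EM algorithm that the paper generalises can be sketched for a two-component, equal-variance Gaussian mixture (a self-contained Python illustration under our own simplifying assumptions; the semiparametric variant replaces the Gaussian component densities with nonparametric estimates):

```python
import math

def em_two_gaussians(xs, iters=100):
    """EM for a two-component, equal-variance Gaussian mixture.
    Returns (weight_of_component_1, mu1, mu2, sigma)."""
    mu1, mu2 = min(xs), max(xs)              # spread-out initialisation
    sigma = (max(xs) - min(xs)) / 4.0
    w = 0.5
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        r = []
        for x in xs:
            p1 = w * math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            p2 = (1.0 - w) * math.exp(-0.5 * ((x - mu2) / sigma) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate the weight, the means, and the shared sigma
        s = sum(r)
        w = s / len(xs)
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / s
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - s)
        var = sum(ri * (x - mu1) ** 2 + (1.0 - ri) * (x - mu2) ** 2
                  for ri, x in zip(r, xs)) / len(xs)
        sigma = math.sqrt(var)
    return w, mu1, mu2, sigma
```

The E-step/M-step alternation is exactly the scheme the paper carries over; only the component-density updates change in the semiparametric setting.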

6.
This paper proposes a new concept and method for imposing imprecise (fuzzy) input and output data on the conventional linear regression model. We introduce the fuzzy scalar (inner) product to formulate the fuzzy linear regression model. In order to invoke the conventional approach of linear regression analysis for real-valued data, we work with the α-level linear regression models of the fuzzy linear regression model. We construct the membership functions of the fuzzy least squares estimators via the "Resolution Identity", a well-known formula in fuzzy set theory. In order to obtain the membership value of any given least squares estimate taken from the fuzzy least squares estimator, we transform the original problem into optimization problems. We also provide two computational procedures for solving these optimization problems.

7.
A non-parametric multimodal background model for moving-object detection is proposed. The model obtains the background density function from a training image sequence using a binned kernel density estimation algorithm. By using a binning rule based on the centroid of the data in each grid cell, the binned estimator extracts the key information of the training sequence and avoids the repeated computation required when all sample points are used, greatly improving the real-time performance of the moving-object detection algorithm. Comparison with the full-sample algorithm shows that the background model is effective for moving-object detection and can be used in real-time outdoor traffic surveillance systems.
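A minimal Python sketch of the binned kernel density estimate at the heart of this method (the paper bins at the centroid of the data within each grid cell; for simplicity this illustration places the kernels at the bin centres, and all names are ours):

```python
import math

def binned_kde(samples, lo, hi, n_bins, bandwidth):
    """Binned kernel density estimate: histogram the samples once, then
    smooth the bin counts with a Gaussian kernel placed at each bin centre.
    Returns (bin_centres, density_values on those centres)."""
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in samples:
        i = min(n_bins - 1, max(0, int((x - lo) / width)))
        counts[i] += 1
    centres = [lo + (i + 0.5) * width for i in range(n_bins)]
    norm = len(samples) * bandwidth * math.sqrt(2.0 * math.pi)
    density = [sum(c * math.exp(-0.5 * ((t - m) / bandwidth) ** 2)
                   for c, m in zip(counts, centres)) / norm
               for t in centres]
    return centres, density
```

Because the n samples are compressed into n_bins weighted kernels, evaluating the density costs O(n + n_bins²) rather than the O(n · n_eval) of the full-sample estimator, which is what makes per-pixel background modelling feasible in real time.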

8.
In this paper, we consider a semiparametric partially linear regression model with missing data in the response. We propose robust Fisher-consistent estimators for the regression parameter, for the regression function, and for the marginal location parameter of the response variable. A robust cross-validation method is briefly discussed, although, from our numerical results, the marginal estimators appear insensitive to the bandwidth parameter. Finally, a Monte Carlo study is carried out to compare the performance of the proposed robust estimators among themselves and with the classical ones, for normal and contaminated samples, under different missing data models. An example based on a real data set is also discussed.

9.
Methods for estimating the parameters of the logistic regression model when the data are collected under a case-control (retrospective) scheme are compared. The regression coefficients are estimated by maximum likelihood methodology, which leaves the constant term parameter to be estimated. Four methods for estimating this parameter are proposed. The comparison of the four estimators is in two parts. First, they are compared for large samples; this is accomplished via the asymptotic distribution of the estimators. Second, the estimators are compared for small samples; this is conducted via simulation using 11 logistic models. The main focus of this paper is the estimation of the posterior probability that the response variable is a success (Px), as given by the logistic regression model, when the constant parameter is estimated by each of the four proposed methods. A further concern is the comparison of the logistic discriminant procedures when each of the four methods of estimating the constant parameter is used; the linear discriminant function procedure is also included. This comparison is executed only for small samples via simulation. It was found that when estimating Px, method 1 (essentially the MLE) minimizes the expected mean square error. The results were not as clear when the parameter of interest was the constant term itself. The results of the classification comparisons imply that when the logistic model contains mostly (or all) binary regression variables, the logistic discriminant procedure using method 1 to estimate the constant term gives the minimum expected error rate; otherwise the linear discriminant function gives the minimum expected error rate. In the latter case the logistic discriminant procedure (with the method 1 estimator of the constant term) is approximately as good.
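One standard way to recover a population-scale constant term from a case-control fit is the familiar prior-correction adjustment, sketched below in Python (this is an illustration of the general idea, not necessarily any of the paper's four methods; names are ours):

```python
import math

def corrected_intercept(beta0_cc, n_cases, n_controls, prevalence):
    """Shift the intercept estimated from case-control data so that the
    fitted probabilities refer to a population with the given prevalence
    of success; the slope coefficients need no adjustment."""
    return (beta0_cc
            - math.log(n_cases / n_controls)
            + math.log(prevalence / (1.0 - prevalence)))
```

When the case-to-control sampling ratio matches the population odds of success, the correction vanishes and the retrospective intercept can be used as is.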

10.
The constrained estimation in Cox's model for right-censored survival data is studied, and the asymptotic properties of the constrained estimators are derived by the Lagrangian method based on the Karush–Kuhn–Tucker conditions. A novel minorization–maximization (MM) algorithm is developed for calculating the maximum likelihood estimates of the regression coefficients subject to box or linear inequality restrictions in the proportional hazards model. The first M-step of the proposed MM algorithm constructs a surrogate function with a diagonal Hessian matrix, which can be obtained by exploiting the convexity of the exponential function and the negative logarithm function. The second M-step maximizes this surrogate function subject to box constraints, which is equivalent to separately maximizing several one-dimensional concave functions, each with a lower-bound and an upper-bound constraint, resulting in an explicit solution via a median function. The ascent property of the proposed MM algorithm under constraints is theoretically justified. Standard error estimation is also presented via a non-parametric bootstrap approach. Simulation studies are performed to compare the estimates obtained with and without constraints. Two real data sets are used to illustrate the proposed methods.
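The "explicit solution via a median function" can be shown in isolation (a one-line Python sketch; the names are ours): a one-dimensional concave function with unconstrained maximiser u is maximised over a box [lo, hi] at the median of (lo, u, hi), i.e. at u clipped into the box.

```python
def box_constrained_argmax(u, lo, hi):
    """Maximiser over [lo, hi] of a concave 1-D function whose
    unconstrained maximiser is u: the median of (lo, u, hi)."""
    return sorted((lo, u, hi))[1]
```

Applied coordinate-wise to the diagonal-Hessian surrogate, this gives each inner maximisation in closed form, which is what makes the second M-step cheap.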
