Similar Documents

20 similar documents found.
1.
This paper is concerned with the problem of constructing a good predictive distribution relative to the Kullback–Leibler information in a linear regression model. The problem is equivalent to the simultaneous estimation of regression coefficients and error variance in terms of a complicated risk, which yields a new challenging issue in a decision-theoretic framework. An estimator of the variance is incorporated here into a loss for estimating the regression coefficients. Several estimators of the variance and of the regression coefficients are proposed and shown to improve on usual benchmark estimators both analytically and numerically. Finally, the prediction problem of a distribution is noted to be related to an information criterion for model selection like the Akaike information criterion (AIC). Thus, several AIC variants are obtained based on proposed and improved estimators and are compared numerically with AIC as model selection procedures.
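
As a point of reference for the AIC variants discussed above, the sketch below (an illustration, not the paper's improved estimators) computes the standard Gaussian-linear-regression AIC from the profile log-likelihood on simulated data; the design matrices X0 and X1 are hypothetical.

```python
import numpy as np

def aic_gaussian_linreg(y, X):
    """AIC for a Gaussian linear model fit by least squares.

    Uses the profile log-likelihood with the ML variance estimate
    sigma2_hat = RSS/n, so AIC = n*log(2*pi*sigma2_hat) + n + 2*(p + 1).
    """
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    sigma2 = rss / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + 2 * (p + 1)

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

# Compare an intercept-only model with a simple linear model.
X0 = np.ones((n, 1))
X1 = np.column_stack([np.ones(n), x])
print(aic_gaussian_linreg(y, X0), aic_gaussian_linreg(y, X1))
```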

2.
In this paper, we investigate the effect of pre-smoothing on model selection. Christóbal, Faraldo Roca and González Manteiga [A class of linear regression parameter estimators constructed by nonparametric estimation, Ann. Statist., 15 (1987), 603–609] showed the beneficial effect of pre-smoothing on estimating the parameters in a linear regression model. Here, in a regression setting, we show that smoothing the response data prior to model selection by Akaike's information criterion can lead to an improved selection procedure. The bootstrap is used to control the magnitude of the random error structure in the smoothed data. The effect of pre-smoothing on model selection is shown in simulations. The method is illustrated in a variety of settings, including the selection of the best fractional polynomial in a generalized linear model.
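
A minimal sketch of the idea, under strong simplifying assumptions: the responses are pre-smoothed with a Nadaraya–Watson kernel smoother (bandwidth chosen arbitrarily here) before polynomial degrees are compared by AIC. The bootstrap calibration of the error structure used in the paper is not reproduced, so the AIC values on the smoothed data are only indicative.

```python
import numpy as np

def nw_smooth(x, y, bandwidth):
    """Nadaraya-Watson kernel smoother evaluated at the design points."""
    d = (x[:, None] - x[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)
    return (w @ y) / w.sum(axis=1)

def aic_poly(y, x, degree):
    """AIC of a polynomial regression of y on x (Gaussian errors)."""
    X = np.vander(x, degree + 1, increasing=True)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    n = len(y)
    sigma2 = np.sum((y - X @ beta) ** 2) / n
    return n * np.log(sigma2) + 2 * (degree + 2)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 150))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.4, size=len(x))

y_smooth = nw_smooth(x, y, bandwidth=0.05)      # pre-smoothed responses
degrees = range(1, 8)
best_raw = min(degrees, key=lambda d: aic_poly(y, x, d))
best_smooth = min(degrees, key=lambda d: aic_poly(y_smooth, x, d))
print(best_raw, best_smooth)
```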

3.
Model selection criteria are frequently developed by constructing estimators of discrepancy measures that assess the disparity between the 'true' model and a fitted approximating model. The Akaike information criterion (AIC) and its variants result from utilizing Kullback's directed divergence as the targeted discrepancy. The directed divergence is an asymmetric measure of separation between two statistical models, meaning that an alternative directed divergence can be obtained by reversing the roles of the two models in the definition of the measure. The sum of the two directed divergences is Kullback's symmetric divergence. In the framework of linear models, a comparison of the two directed divergences reveals an important distinction between the measures. When used to evaluate fitted approximating models that are improperly specified, the directed divergence which serves as the basis for AIC is more sensitive towards detecting overfitted models, whereas its counterpart is more sensitive towards detecting underfitted models. Since the symmetric divergence combines the information in both measures, it functions as a gauge of model disparity which is arguably more balanced than either of its individual components. With this motivation, the paper proposes a new class of criteria for linear model selection based on targeting the symmetric divergence. The criteria can be regarded as analogues of AIC and two of its variants: 'corrected' AIC or AICc and 'modified' AIC or MAIC. The paper examines the selection tendencies of the new criteria in a simulation study and the results indicate that they perform favourably when compared to their AIC analogues.
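
The distinction between the two directed divergences and their symmetric sum is easiest to see for two univariate normal densities; the snippet below evaluates both directions and Kullback's symmetric divergence for hypothetical 'true' and approximating models.

```python
import numpy as np

def kl_normal(mu1, s1, mu2, s2):
    """Directed Kullback-Leibler divergence KL(N(mu1, s1^2) || N(mu2, s2^2))."""
    return np.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

mu_t, s_t = 0.0, 1.0      # "true" model
mu_a, s_a = 0.3, 1.4      # fitted approximating model

d1 = kl_normal(mu_t, s_t, mu_a, s_a)   # directed divergence underlying AIC
d2 = kl_normal(mu_a, s_a, mu_t, s_t)   # the reversed directed divergence
print(d1, d2, d1 + d2)                 # the sum is Kullback's symmetric divergence
```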

4.
The autoregressive (AR) model is a popular method for fitting and predicting time-dependent data, and selecting an accurate model among the candidate orders is a crucial issue. Two commonly used selection criteria are the Akaike information criterion and the Bayesian information criterion. However, these two criteria are known to be prone to overfitting and underfitting, respectively, so each performs well in some situations but poorly in others. In this paper, we propose a new criterion for AR model selection from a prediction perspective, based on the concept of generalized degrees of freedom. We derive an approximately unbiased estimator of the mean-squared prediction error based on a data-perturbation technique for selecting the order parameter, in which the estimation uncertainty involved in the modeling procedure is taken into account. Numerical experiments illustrate the superiority of the proposed method over some commonly used order selection criteria. Finally, the methodology is applied to a real data example to predict the weekly rate of return on the stock price of Taiwan Semiconductor Manufacturing Company, and the results indicate that the proposed method is satisfactory.
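
For comparison with the proposed criterion, the following sketch shows the two standard competitors on a simulated AR(2) series: each candidate order is fit by conditional least squares and scored by AIC and BIC. The generalized-degrees-of-freedom criterion itself is not implemented here.

```python
import numpy as np

def ar_ls_sigma2(x, p, max_p):
    """Conditional least-squares fit of an AR(p) model.

    All orders condition on the same first max_p values, so the resulting
    criteria are comparable across candidate orders.
    """
    y = x[max_p:]
    X = np.column_stack([np.ones(len(y))] +
                        [x[max_p - k:-k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((y - X @ beta) ** 2)

rng = np.random.default_rng(2)
n, max_p = 400, 8
x = np.zeros(n)
for t in range(2, n):                         # simulate an AR(2) series
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

n_eff = n - max_p
aic, bic = {}, {}
for p in range(1, max_p + 1):
    s2 = ar_ls_sigma2(x, p, max_p)
    aic[p] = n_eff * np.log(s2) + 2 * (p + 1)
    bic[p] = n_eff * np.log(s2) + (p + 1) * np.log(n_eff)
print(min(aic, key=aic.get), min(bic, key=bic.get))
```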

5.
To measure the distance between a robust function evaluated under the true regression model and under a fitted model, we propose generalized Kullback–Leibler information. Using this generalization we have developed three robust model selection criteria, AICR*, AICCR* and AICCR, that allow the selection of candidate models that not only fit the majority of the data but also take into account non-normally distributed errors. The AICR* and AICCR criteria can unify most existing Akaike information criteria; three examples of such unification are given. Simulation studies are presented to illustrate the relative performance of each criterion.

6.
A class of predictive densities is derived by weighting the observed samples in maximizing the log-likelihood function. This approach is effective in cases such as sample surveys or designed experiments, where the observed covariate follows a different distribution from that in the whole population. Under misspecification of the parametric model, the optimal choice of the weight function is shown asymptotically to be the ratio of the density of the covariate in the population to that in the observations; this corresponds to the pseudo-maximum likelihood estimation used in sample surveys. Optimality is defined by the expected Kullback–Leibler loss, and the optimal weight is obtained by considering the importance sampling identity. Under correct specification of the model, however, the ordinary maximum likelihood estimate (i.e. the uniform weight) is shown to be asymptotically optimal. For moderate sample sizes, the situation lies between these two extremes, and the weight function is selected by minimizing a variant of the information criterion derived as an estimate of the expected loss. The method is also applied to a weighted version of the Bayesian predictive density. Numerical examples and Monte Carlo simulations are presented for polynomial regression, and a connection with robust parametric estimation is discussed.
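
A minimal sketch of the weighting idea under covariate shift: the weight is the ratio of the covariate density in the target population to that in the observations, and weighted Gaussian maximum likelihood reduces to weighted least squares. The two densities are assumed known here; the information-criterion-based selection of the weight function proposed in the paper is not reproduced.

```python
import numpy as np
from scipy.stats import norm

# Covariate shift: training covariates ~ N(0, 1), target population ~ N(1, 1).
rng = np.random.default_rng(3)
n = 300
x = rng.normal(0.0, 1.0, n)
y = np.sin(x) + rng.normal(scale=0.3, size=n)   # the linear model below is misspecified

# Importance weights: population density over observation density.
w = norm.pdf(x, loc=1.0, scale=1.0) / norm.pdf(x, loc=0.0, scale=1.0)

def weighted_poly_fit(x, y, w, degree):
    """Weighted least squares = weighted Gaussian MLE for a polynomial model."""
    X = np.vander(x, degree + 1, increasing=True)
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

beta_uniform = weighted_poly_fit(x, y, np.ones(n), degree=1)  # ordinary MLE
beta_weighted = weighted_poly_fit(x, y, w, degree=1)          # density-ratio weights
print(beta_uniform, beta_weighted)
```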

7.
This paper is concerned with the problem of selecting variables in two-group discriminant analysis for high-dimensional data with fewer observations than the dimension. We consider a selection criterion based on an approximately unbiased estimator of an AIC-type risk. When the dimension is large compared with the sample size, such a risk cannot be defined in the usual way, so we propose an AIC obtained by replacing the maximum likelihood estimator with a ridge-type estimator. This idea follows Srivastava and Kubokawa (2008) and was further extended by Yamamura et al. (2010). Simulations show that the proposed AIC performs well.
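
The key ingredient is replacing an estimator that breaks down when the dimension exceeds the sample size with a ridge-type one. The sketch below (with an arbitrary, purely illustrative ridge constant) regularizes the pooled covariance matrix so that a two-group linear discriminant direction can still be formed; the criterion construction itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)
p, n1, n2 = 200, 30, 30                     # dimension larger than sample size
mu = np.zeros(p)
mu[:5] = 1.0
X1 = rng.normal(size=(n1, p))
X2 = rng.normal(size=(n2, p)) + mu

# Pooled sample covariance: singular here because p > n1 + n2 - 2.
S = (((X1 - X1.mean(0)).T @ (X1 - X1.mean(0))) +
     ((X2 - X2.mean(0)).T @ (X2 - X2.mean(0)))) / (n1 + n2 - 2)

lam = np.trace(S) / p                       # one simple illustrative ridge constant
S_ridge = S + lam * np.eye(p)               # ridge-type estimator, always invertible

# Linear discriminant direction built from the ridge-type estimator.
w = np.linalg.solve(S_ridge, X2.mean(0) - X1.mean(0))
print(w[:5])
```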

8.
We propose a criterion for selecting a capture-recapture model for closed populations, which follows the basic idea of the focused information criterion (FIC) of Claeskens and Hjort. The proposed criterion aims at selecting the model which, among the available models, leads to the smallest mean-squared error (MSE) of the resulting estimator of the population size and is based on an index which, up to a constant term, is equal to the asymptotic MSE of the estimator. Two alternative approaches to estimate this FIC index are proposed. We also deal with multimodel inference; in this case, the population size is estimated by using a weighted average of the estimates coming from different models, with weights chosen so as to minimize the MSE of the resulting estimator. The proposed model selection approach is compared with more common approaches through a series of simulations. It is also illustrated by an application based on a dataset coming from a live-trapping experiment.

9.
In this paper, we develop Bayesian methodology and computational algorithms for variable subset selection in Cox proportional hazards models with missing covariate data. A new joint semi-conjugate prior for the piecewise exponential model is proposed in the presence of missing covariates and its properties are examined. The covariates are assumed to be missing at random (MAR). Under this new prior, a version of the Deviance Information Criterion (DIC) is proposed for Bayesian variable subset selection in the presence of missing covariates. Monte Carlo methods are developed for computing the DICs for all possible subset models in the model space. A Bone Marrow Transplant (BMT) dataset is used to illustrate the proposed methodology.
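
For orientation, DIC is computed from posterior draws as the posterior mean deviance plus the effective number of parameters pD. The sketch below illustrates this on a toy exponential model with a conjugate posterior; it is not the piecewise exponential Cox model or the missing-covariate machinery developed in the paper.

```python
import numpy as np
from scipy.stats import gamma, expon

rng = np.random.default_rng(5)
y = rng.exponential(scale=2.0, size=50)          # data
# Conjugate Gamma posterior for the exponential rate (Gamma(1, 1) prior).
rate_draws = gamma.rvs(a=1 + len(y), scale=1 / (1 + y.sum()),
                       size=5000, random_state=rng)

def deviance(rate):
    return -2 * expon.logpdf(y, scale=1 / rate).sum()

D_draws = np.array([deviance(r) for r in rate_draws])
D_bar = D_draws.mean()                           # posterior mean deviance
D_at_mean = deviance(rate_draws.mean())          # deviance at the posterior mean
p_D = D_bar - D_at_mean                          # effective number of parameters
DIC = D_bar + p_D
print(p_D, DIC)
```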

10.
In this paper we prove the consistency in probability of a class of generalized BIC criteria for model selection in non-linear regression by using asymptotic results of Gallant. This extends a result obtained by Nishii for model selection in linear regression.

11.
Motivated by the papers of Woodward and Gray (1979) and Gray, Kelly and McIntire (1978) on the R and S array approach to ARMA modeling, the authors show that the R and S array algorithm is completely equivalent to Levinson recursion. Since entries in the R and S array can be computed by either algorithm, the equivalence provides greater insight into the R and S methodology as well as its links to Akaike's AIC or FPE. Numerical simulations serve to highlight the differences between the various approaches as well as illustrate the problems associated with exact methods. The R and S array approach is shown to be an effective procedure for determining ARMA model orders.
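
The Levinson (Levinson–Durbin) recursion referred to above solves the Yule–Walker equations order by order, producing the AR coefficients, reflection coefficients and innovation variances from which array-type order-determination tools can be assembled. A standard implementation, applied to the theoretical autocovariances of an AR(1) process, is sketched below.

```python
import numpy as np

def levinson_durbin(r):
    """Levinson-Durbin recursion.

    r: autocovariances r[0..p]. Returns the AR coefficient vectors of orders
    1..p, the reflection coefficients and the innovation variances.
    """
    p = len(r) - 1
    a = np.zeros(p + 1)
    coeffs, reflection, variances = [], [], []
    var = r[0]
    for k in range(1, p + 1):
        acc = r[k] - sum(a[j] * r[k - j] for j in range(1, k))
        kappa = acc / var
        a_new = a.copy()
        a_new[k] = kappa
        for j in range(1, k):
            a_new[j] = a[j] - kappa * a[k - j]
        a = a_new
        var = var * (1 - kappa ** 2)
        coeffs.append(a[1:k + 1].copy())
        reflection.append(kappa)
        variances.append(var)
    return coeffs, reflection, variances

# Autocovariances of an AR(1) process with phi = 0.7 and unit innovation variance.
phi = 0.7
r = np.array([phi ** k for k in range(5)]) / (1 - phi ** 2)
coeffs, refl, var = levinson_durbin(r)
print(coeffs[0], refl)   # order-1 fit recovers phi; higher reflections are ~ 0
```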

12.
Model selection problems arise when constructing unbiased or asymptotically unbiased estimators of discrepancy measures in order to find the best model. Most of the usual criteria are based on goodness of fit and parsimony, and aim to maximize a transformed version of the likelihood. For linear regression models with normally distributed errors, the situation is less clear when two models are equivalent: are they close to or far from the unknown true model? In this work, based on stochastic and parametric simulation, we study the results of Vuong's test, Cox's test, Akaike's information criterion, the Bayesian information criterion, the Kullback information criterion and the bias-corrected Kullback information criterion, and the ability of these procedures to discriminate between non-nested linear models.

13.
We consider a function defined as the pointwise minimizer of a doubly indexed random process. We are interested in the weak convergence of the minimizer in the space of bounded functions. Such convergence results can be applied in the context of penalized M-estimation, that is, when the random process to be minimized is expressed as a goodness-of-fit term plus a penalty term multiplied by a penalty weight. This weight is called the regularization parameter and the minimizing function the regularization path. The regularization path can be seen as a collection of estimators indexed by the regularization parameter. We obtain a consistency result and a central limit theorem for the regularization path in a functional sense. Various examples are provided, including the ℓ1-regularization path for general linear models, the ℓ1- or ℓ2-regularization path of least absolute deviation regression, and the Akaike information criterion.
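
As a concrete instance of a regularization path, the sketch below computes the ℓ1 (lasso) coefficient path over a grid of penalty weights using scikit-learn's lasso_path (assumed available); each column of the returned coefficient array is an estimator indexed by the regularization parameter. The functional convergence theory of the paper is not illustrated by this.

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(6)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# Coefficients as a function of the regularization parameter (the path).
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
print(coefs.shape)       # (p, 50): one coefficient curve per predictor
print(coefs[:, -1])      # smallest penalty: close to the least-squares fit
```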

14.
The method of Bayesian model selection for join point regression models is developed. Given a set of K + 1 join point models M0, M1, …, MK with 0, 1, …, K join points respectively, the posterior distributions of the parameters and competing models Mk are computed by Markov chain Monte Carlo simulations. The Bayes information criterion BIC is used to select the model Mk with the smallest value of BIC as the best model. Another approach based on the Bayes factor selects the model Mk with the largest posterior probability as the best model when the prior distribution of Mk is discrete uniform. Both methods are applied to analyse the observed US cancer incidence rates for some selected cancer sites. The graphs of the join point models fitted to the data are produced by using the methods proposed and compared with the method of Kim and co-workers that is based on a series of permutation tests. The analyses show that the Bayes factor is sensitive to the prior specification of the variance σ², and that the model which is selected by BIC fits the data as well as the model that is selected by the permutation test and has the advantage of producing the posterior distribution for the join points. The Bayesian join point model and model selection method that are presented here will be integrated in the National Cancer Institute's join point software (http://www.srab.cancer.gov/joinpoint/) and will be available to the public.
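
A frequentist sketch of BIC-based join point selection (not the MCMC/Bayes-factor machinery of the paper): continuous piecewise-linear models with zero or one join point are fit by least squares, the join point is profiled over a grid, and the model with the smaller BIC is retained.

```python
import numpy as np

def fit_joinpoint(x, y, taus):
    """Least-squares fit of a continuous piecewise-linear model with join
    points `taus`; returns the residual sum of squares and parameter count."""
    X = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - t, 0.0) for t in taus])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return rss, X.shape[1] + 1            # + 1 for the error variance

def bic(rss, n, k):
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 120)
y = 1.0 + 0.5 * x - 1.2 * np.maximum(x - 6.0, 0) + rng.normal(scale=0.5, size=len(x))
n = len(x)

rss0, k0 = fit_joinpoint(x, y, [])                    # M0: no join point
bic0 = bic(rss0, n, k0)
# M1: profile over a grid of candidate join points, then count tau as a parameter.
grid = np.linspace(1, 9, 81)
rss1, k1 = min((fit_joinpoint(x, y, [t]) for t in grid), key=lambda r: r[0])
bic1 = bic(rss1, n, k1 + 1)
print(bic0, bic1)                                     # smaller BIC wins
```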

15.
Varying-coefficient models (VCMs) are useful tools for analysing longitudinal data, as they can effectively describe the relationship between predictors and repeatedly measured responses. VCMs estimated by regularization methods are strongly affected by the values of the regularization parameters, so selecting these values is a crucial issue. In order to choose these parameters objectively, we derive model selection criteria for evaluating VCMs from information-theoretic and Bayesian viewpoints. Models are estimated by the method of regularization with basis expansions and then evaluated by the model selection criteria. We demonstrate the effectiveness of the proposed criteria through Monte Carlo simulations and real data analysis.
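
The criteria derived in the paper are not reproduced here; as a generic illustration of choosing a regularization parameter for a basis-expansion fit by a criterion, the sketch below scores a ridge-penalized polynomial basis by GCV, with the effective degrees of freedom taken as the trace of the smoother matrix.

```python
import numpy as np

def gcv_penalized_fit(X, y, lam):
    """Ridge-penalized least squares with its generalized cross-validation score.

    df is the trace of the smoother ('hat') matrix, the usual effective
    number of parameters appearing inside information-type criteria.
    """
    n, p = X.shape
    A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)   # smoother matrix
    resid = y - A @ y
    df = np.trace(A)
    return n * np.sum(resid ** 2) / (n - df) ** 2, df

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=len(t))
X = np.vander(t, 10, increasing=True)          # a simple basis expansion

scores = {lam: gcv_penalized_fit(X, y, lam)[0] for lam in 10.0 ** np.arange(-6, 3)}
print(min(scores, key=scores.get))             # regularization parameter chosen by GCV
```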

16.
In its application to variable selection in the linear model, cross-validation is traditionally applied to an individual model contained in a set of potential models. Each model in the set is cross-validated independently of the rest and the model with the smallest cross-validated sum of squares is selected. In such settings, an efficient algorithm for cross-validation must be able to add and to delete single points quickly from a mixed model. Recent work in variable selection has applied cross-validation to an entire process of variable selection, such as Backward Elimination or Stepwise regression (Thall, Simon and Grier, 1992). The cross-validated version of Backward Elimination, for example, divides the data into an estimation and validation set and performs a complete Backward Elimination on the estimation set, while computing the cross-validated sum of squares at each step with the validation set. After doing this process once, a different validation set is selected and the process is repeated. The final model selection is based on the cross-validated sum of squares for all Backward Eliminations. An optimal algorithm for this application of cross-validation need not be efficient in adding and deleting observations from a single model but must be efficient in computing the cross-validation sum of squares from a series of models using a common validation set. This paper explores such an algorithm based on the sweep operator.
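
A brute-force sketch of cross-validating the whole elimination process (without the sweep-operator efficiencies the paper develops): each random split runs a complete backward elimination on the estimation set and accumulates the validation sum of squares at every step; the model size with the smallest accumulated score would then be chosen. The split scheme and all names here are illustrative.

```python
import numpy as np

def val_ss(X_train, y_train, X_val, y_val, cols):
    """Validation sum of squares for the model using columns `cols`."""
    beta, *_ = np.linalg.lstsq(X_train[:, cols], y_train, rcond=None)
    r = y_val - X_val[:, cols] @ beta
    return np.sum(r ** 2)

def cv_backward_elimination(X, y, n_splits=5, seed=0):
    """Cross-validate the backward-elimination *process*: each split runs a
    complete elimination on the estimation set and scores every step on the
    common validation set."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    totals = {}                                  # model size -> summed CV SS
    for _ in range(n_splits):
        val = rng.choice(n, size=n // 5, replace=False)
        est = np.setdiff1d(np.arange(n), val)
        cols = list(range(p))
        while cols:
            totals[len(cols)] = totals.get(len(cols), 0.0) + \
                val_ss(X[est], y[est], X[val], y[val], cols)
            if len(cols) == 1:
                break
            # Drop the column whose removal hurts the estimation-set fit least.
            cols.remove(min(cols, key=lambda j: val_ss(
                X[est], y[est], X[est], y[est], [c for c in cols if c != j])))
    return totals

rng = np.random.default_rng(9)
X = np.column_stack([np.ones(120), rng.normal(size=(120, 5))])
y = X[:, :3] @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=120)
print(cv_backward_elimination(X, y))             # CV sum of squares by model size
```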

17.
M-estimation is a widely used technique for robust statistical inference. In this paper, we study a robust partially functional linear regression model in which a scalar response variable is explained by a function-valued variable and a finite number of real-valued variables. For the estimation of the regression parameters, which include the infinite-dimensional slope function as well as the slope parameters for the real-valued variables, we use polynomial splines to approximate the slope function. The estimation procedure is easy to implement, and it is resistant to heavy-tailed errors or outliers in the response. The asymptotic properties of the proposed estimators are established. Finally, we assess the finite-sample performance of the proposed method by Monte Carlo simulation studies.
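
The robustness mechanism is ordinary M-estimation with a bounded-influence loss. The sketch below fits a simple linear model (no functional covariate or spline basis) by minimizing the Huber loss and compares it with least squares under heavy-tailed errors; it is only a caricature of the estimator studied in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def huber_loss(r, delta=1.345):
    """Huber's rho function: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

rng = np.random.default_rng(10)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=2, size=n)   # heavy-tailed errors
X = np.column_stack([np.ones(n), x])

obj = lambda b: np.sum(huber_loss(y - X @ b))
beta_m = minimize(obj, x0=np.zeros(2), method="BFGS").x
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_m, beta_ls)                              # M-estimate vs least squares
```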

18.
Competing models arise naturally in many research fields, such as survival analysis and economics, when the same phenomenon of interest is explained by different researchers using different theories or according to different experiences. The model selection problem is therefore of great importance for the subsequent inference: inference under a misspecified or inappropriate model is risky. Existing model selection tests such as Vuong's tests [Q.H. Vuong, Likelihood ratio tests for model selection and non-nested hypotheses, Econometrica 57 (1989), pp. 307–333, doi: 10.2307/1912557] and Shi's non-degenerate tests [X. Shi, A non-degenerate Vuong test, Quant. Econ. 6 (2015), pp. 85–121, doi: 10.3982/QE382] suffer from difficulties with variance estimation and from departures from normality of the likelihood ratios. To circumvent these dilemmas, we propose in this paper an empirical likelihood ratio (ELR) test for model selection. Following Shi (2015), a bias-correction method is proposed for the ELR test to enhance its performance. A simulation study and a real-data analysis are provided to illustrate the performance of the proposed ELR test.
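
For context, the classical Vuong statistic that the paper seeks to improve upon is a studentized mean of pointwise log-likelihood differences between the two fitted models (the adjustment for parameter counts is omitted in this sketch, and the two candidate models are illustrative). The ELR test proposed in the paper is not implemented here.

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(11)
y = rng.standard_t(df=5, size=300)                        # data

# Two candidate models with parameters fit by maximum likelihood.
mu, sigma = y.mean(), y.std(ddof=0)
ll_normal = norm.logpdf(y, loc=mu, scale=sigma)           # model f: normal
df_hat, loc_hat, scale_hat = t.fit(y)
ll_t = t.logpdf(y, df_hat, loc=loc_hat, scale=scale_hat)  # model g: Student t

d = ll_t - ll_normal                                      # pointwise log-likelihood ratios
n = len(y)
z = np.sqrt(n) * d.mean() / d.std(ddof=0)                 # Vuong's normalized statistic
p_value = 2 * norm.sf(abs(z))
print(z, p_value)
```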

19.
This article addresses the problem of parameter estimation for the logistic regression model under subspace information via linear shrinkage, pretest, and shrinkage pretest estimators, along with the traditional unrestricted maximum likelihood estimator and the restricted estimator. We develop an asymptotic theory for the linear shrinkage and pretest estimators and compare their relative performance using the notions of asymptotic distributional bias and asymptotic quadratic risk. The analytical results demonstrate that the proposed estimation strategies outperform the classical strategies in a meaningful part of the parameter space. Detailed Monte Carlo simulation studies were conducted for different parameter combinations, and the performance of each estimation method was evaluated in terms of simulated relative efficiency. The results of the simulation study are in strong agreement with the asymptotic analytical findings. Two real-data examples are also given to appraise the performance of the estimators.
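
A sketch of the pretest ingredient only, under illustrative choices (statsmodels for the logistic fits, a 5% likelihood-ratio pretest of the subspace restriction): the estimator equals the restricted MLE when the restriction is not rejected and the unrestricted MLE otherwise. The linear shrinkage and shrinkage pretest variants combine the same two fits with data-driven weights.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(12)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
eta = X @ np.array([0.5, 1.0, 0.0, 0.0])            # last two effects truly zero
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

full = sm.Logit(y, X).fit(disp=0)                   # unrestricted MLE
restricted = sm.Logit(y, X[:, :2]).fit(disp=0)      # subspace: beta_2 = beta_3 = 0

# Likelihood-ratio pretest of the subspace restriction.
lr = 2 * (full.llf - restricted.llf)
q = 2                                               # number of restrictions
use_restricted = lr <= chi2.ppf(0.95, q)

beta_restricted = np.concatenate([restricted.params, np.zeros(q)])
beta_pretest = beta_restricted if use_restricted else full.params
print(beta_pretest)
```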

20.
J. Gladitz and J. Pilz, Statistics, 2013, 47(4): 491–506
We deal with experimental designs minimizing the mean square error of the linear Bayes estimator for the parameter vector of a multiple linear regression model, where the experimental region is the k-dimensional unit sphere. After computing the uniquely determined optimum information matrix, we construct, separately for the homogeneous and the inhomogeneous model, both approximate and exact designs having such an information matrix.
