Similar Documents (20 results)
1.
ABSTRACT

Nowadays, generalized linear models have many applications. Among the most widely used in practice are models with random effects, that is, models in which some of the unknown parameters are treated as random variables. In this article, this situation is considered for logistic regression models with a random intercept having an exponential distribution. The aim is to obtain the Bayesian D-optimal design; thus, the method is to maximize the Bayesian D-optimal criterion. For the model considered here, this criterion is a function of the quasi-information matrix, which depends on the unknown parameters of the model. In the Bayesian D-optimal criterion, the expectation is taken with respect to the prior distributions assigned to the unknown parameters, so the criterion becomes a function only of the experimental settings (support points) and their weights. Uniform and normal prior distributions are considered for the fixed parameters. The Bayesian D-optimal design is finally computed numerically with R (version 3.1.1).
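As an illustration of the kind of criterion being maximized, the sketch below is a minimal Monte Carlo version, not taken from the paper: it averages the log-determinant of the Fisher information of a plain logistic model over draws from an assumed normal prior and optimizes a two-point design on [0, 1]. The paper's quasi-information matrix for the random-intercept model would replace the information matrix used here; all prior and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Draws from an assumed normal prior for the fixed parameters (illustrative values).
prior_beta = rng.normal(loc=[0.5, 1.0], scale=[0.3, 0.3], size=(500, 2))

def bayes_d_criterion(design):
    """Expected log-determinant of the logistic-regression information matrix,
    averaged over the prior draws; design = (x1, x2, w1) with two support
    points in [0, 1] and weight w1 on the first point."""
    x = np.array(design[:2])
    w = np.array([design[2], 1.0 - design[2]])
    F = np.column_stack([np.ones(2), x])              # regression functions (1, x)
    total = 0.0
    for beta in prior_beta:
        p = 1.0 / (1.0 + np.exp(-(F @ beta)))
        M = (F * (w * p * (1.0 - p))[:, None]).T @ F  # weighted information matrix
        sign, logdet = np.linalg.slogdet(M)
        total += logdet if sign > 0 else -np.inf
    return total / len(prior_beta)

# Maximize the Bayesian D-criterion (minimize its negative) over the design region.
res = minimize(lambda d: -bayes_d_criterion(d), x0=[0.2, 0.8, 0.5],
               bounds=[(0.0, 1.0), (0.0, 1.0), (0.05, 0.95)], method="L-BFGS-B")
print("support points:", res.x[:2], "weights:", res.x[2], 1.0 - res.x[2])
```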

2.
3.
In the presence of multicollinearity, the rk class estimator, a general estimator that includes the ordinary ridge regression (ORR), principal components regression (PCR), and OLS estimators as special cases, is proposed as an alternative to the ordinary least squares (OLS) estimator. Comparison of competing estimators of a parameter in the sense of the mean square error (MSE) criterion is of central interest. An alternative to the MSE criterion is Pitman's (1937) closeness (PC) criterion. In this paper, we compare the rk class estimator to the OLS estimator in terms of the PC criterion, thereby recovering the comparison of the ORR estimator to the OLS estimator under the PC criterion given by Mason et al. (1990), as well as the comparison of the PCR estimator to the OLS estimator under the PC criterion given by Lin and Wei (2002).
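The Pitman closeness comparison can be illustrated numerically. The sketch below is a Monte Carlo illustration, not the paper's analytic derivation: under an assumed collinear design it estimates the probability that an ordinary ridge estimate (a special rk-class member) lies closer to the true coefficient vector than the OLS estimate under squared-error loss. The ridge constant, design, and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 50, 3, 1.0            # sample size, predictors, ridge constant (illustrative)
beta = np.array([1.0, 2.0, -1.0])

# Collinear design: the second column nearly duplicates the first.
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)

closer, n_sim = 0, 2000
for _ in range(n_sim):
    y = X @ beta + rng.normal(size=n)
    XtX = X.T @ X
    b_ols = np.linalg.solve(XtX, X.T @ y)
    b_ridge = np.linalg.solve(XtX + k * np.eye(p), X.T @ y)   # ORR estimator
    # Pitman closeness under squared-error loss: which estimate is nearer to beta?
    closer += np.sum((b_ridge - beta) ** 2) < np.sum((b_ols - beta) ** 2)

print("P(ridge closer than OLS) ≈", closer / n_sim)   # > 0.5 favours ridge under PC
```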

4.
Existing projection designs (e.g. maximum projection designs) attempt to achieve good space-filling properties in all projections. However, when using a Gaussian process (GP), a model-based design criterion such as the entropy criterion is more appropriate. We employ the entropy criterion averaged over a set of projections, called the expected entropy criterion (EEC), to generate projection designs. We show that maximum EEC designs are invariant to monotonic transformations of the response, i.e. they are optimal for a wide class of stochastic process models. We also demonstrate that transforming each column of a Latin hypercube design (LHD) by a monotonic function can substantially improve the EEC. Two types of input transformations are considered: the quantile function of a symmetric Beta distribution chosen to optimize the EEC, and a nonparametric transformation corresponding to the quantile function of a symmetric density chosen to optimize the EEC. Numerical studies show that the proposed transformations of the LHD are efficient and effective for building robust maximum EEC designs. These designs give projections with markedly higher entropies and lower maximum prediction variances (MPVs) at the cost of small increases in average prediction variances (APVs) compared with state-of-the-art space-filling designs over wide ranges of covariance parameter values.
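A minimal sketch of the averaged-entropy idea, not the paper's implementation: the GP entropy of a design is taken (up to additive constants) as half the log-determinant of its correlation matrix under an assumed Gaussian correlation, and the EEC is its average over all two-dimensional projections. The Latin hypercube, correlation parameter, and Beta(0.7, 0.7) quantile transform are illustrative.

```python
import numpy as np
from itertools import combinations
from scipy.spatial.distance import pdist, squareform
from scipy.stats import beta

def gp_entropy(X, theta=5.0):
    """GP entropy (up to additive constants) at design X under the Gaussian
    correlation exp(-theta * d^2): 0.5 * log det R."""
    R = np.exp(-theta * squareform(pdist(X)) ** 2)
    sign, logdet = np.linalg.slogdet(R + 1e-10 * np.eye(len(X)))
    return 0.5 * logdet

def expected_entropy(X, dims=2, theta=5.0):
    """Average GP entropy over all `dims`-dimensional projections of the design."""
    ents = [gp_entropy(X[:, list(c)], theta)
            for c in combinations(range(X.shape[1]), dims)]
    return np.mean(ents)

# A random Latin hypercube in 4 factors with 20 runs (illustrative).
rng = np.random.default_rng(2)
n, d = 20, 4
lhd = (np.argsort(rng.random((n, d)), axis=0) + rng.random((n, d))) / n

print("EEC of the LHD:", expected_entropy(lhd))
# Transforming each column by a symmetric Beta quantile function, as discussed in
# the abstract, can then be assessed by re-evaluating the criterion.
transformed = beta.ppf(lhd, 0.7, 0.7)   # illustrative Beta(0.7, 0.7) transform
print("EEC after Beta quantile transform:", expected_entropy(transformed))
```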

5.
The least squares estimator is usually applied when estimating the parameters in linear regression models. As this estimator is sensitive to departures from normality in the residual distribution, several alternatives have been proposed. The Lp norm estimators are one such class of alternatives. It has been proposed that the kurtosis of the residual distribution be taken into account when choosing an estimator from the Lp norm class (i.e. when choosing p). In this paper, the asymptotic variance of the estimators is used as the criterion for the choice of p. It is shown that when this criterion is applied, characteristics of the residual distribution other than the kurtosis (namely moments of order p-2 and 2p-2) are important.
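For concreteness, the sketch below (an illustration, not the paper's procedure for choosing p) computes Lp norm estimates for a few values of p on simulated data with heavy-tailed residuals by directly minimizing the sum of absolute residuals raised to the power p.

```python
import numpy as np
from scipy.optimize import minimize

def lp_estimate(X, y, p=1.5):
    """L_p norm estimator: minimise sum |y - X b|^p over b (p >= 1)."""
    b0 = np.linalg.lstsq(X, y, rcond=None)[0]         # OLS (p = 2) as a starting value
    obj = lambda b: np.sum(np.abs(y - X @ b) ** p)
    return minimize(obj, b0, method="Nelder-Mead").x

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# Heavy-tailed residuals (high kurtosis): values of p below 2 are typically favoured.
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)

for p in (1.2, 1.5, 2.0):
    print("p =", p, "estimate:", lp_estimate(X, y, p))
```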

6.
Point process models are a natural approach for modelling data that arise as point events. In the case of Poisson counts, these may be fitted easily as a weighted Poisson regression. Point processes, however, lack the notion of sample size. This is problematic for model selection, because various classical criteria such as the Bayesian information criterion (BIC) are functions of the sample size, n, and are derived in an asymptotic framework where n tends to infinity. In this paper, we develop an asymptotic result for Poisson point process models in which the observed number of point events, m, plays the role that sample size does in the classical regression context. Following from this result, we derive a version of BIC for point process models and, when the model is fitted via penalised likelihood, conditions on the LASSO penalty that ensure consistency in estimation and the oracle property. We discuss the challenges of extending these results to the wider class of Gibbs models, of which the Poisson point process model is a special case.
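The sketch below illustrates the form such a criterion takes, with the observed event count m replacing n in the BIC penalty. The simulated intensity, the thinning construction, and the use of the true parameters in place of maximum likelihood estimates are purely illustrative and are not taken from the paper.

```python
import numpy as np

def point_process_bic(loglik, n_params, m_points):
    """BIC for a Poisson point process model: the observed number of events m
    plays the role of the sample size n in the classical penalty."""
    return -2.0 * loglik + n_params * np.log(m_points)

# Toy comparison on [0, 1]: homogeneous rate vs. intensity lambda(x) = exp(a + b x).
rng = np.random.default_rng(4)
a_true, b_true = 3.0, 2.0
lam_max = np.exp(a_true + b_true)
n_cand = rng.poisson(lam_max)                      # simulate by thinning
x_cand = rng.random(n_cand)
keep = rng.random(n_cand) < np.exp(a_true + b_true * x_cand) / lam_max
x = x_cand[keep]
m = len(x)

# Poisson process log-likelihood: sum_i log lambda(x_i) - integral of lambda over [0, 1].
def loglik_hom(rate):
    return m * np.log(rate) - rate
def loglik_inhom(a, b):
    return np.sum(a + b * x) - (np.exp(a + b) - np.exp(a)) / b

print("BIC, homogeneous (MLE rate = m):", point_process_bic(loglik_hom(m), 1, m))
print("BIC, inhomogeneous (true parameters, for illustration):",
      point_process_bic(loglik_inhom(a_true, b_true), 2, m))
```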

7.
We find optimal designs for linear models using a novel algorithm that iteratively combines a semidefinite programming (SDP) approach with adaptive grid techniques. The proposed algorithm is also adapted to find locally optimal designs for nonlinear models. The search space is first discretized, and SDP is applied to find the optimal design on the initial grid. The points in the next grid are those that maximize the dispersion function of the SDP-generated optimal design, found using nonlinear programming. The procedure is repeated until a user-specified stopping rule is reached. The proposed algorithm is broadly applicable, and we demonstrate its flexibility using (i) models with one or more variables and (ii) differentiable design criteria, such as A- and D-optimality, as well as the non-differentiable E-optimality criterion, including the mathematically more challenging case where the minimum eigenvalue of the information matrix of the optimal design has geometric multiplicity larger than 1. Our algorithm is computationally efficient because it is based on mathematical programming tools, so optimality is assured at each stage, and it exploits the convexity of the problems whenever possible. Using several linear and nonlinear models with one or more factors, we show that the proposed algorithm can efficiently find optimal designs.
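A minimal sketch of the first stage of such a scheme, the convex optimization of design weights on a fixed candidate grid, assuming the cvxpy library is available. The adaptive grid refinement via the dispersion function is omitted, and the quadratic regression model and grid are illustrative choices, not the paper's examples.

```python
import numpy as np
import cvxpy as cp

# Candidate grid on [-1, 1] and quadratic regression functions f(x) = (1, x, x^2).
grid = np.linspace(-1, 1, 41)
F = np.column_stack([np.ones_like(grid), grid, grid ** 2])

# D-optimal design weights: maximize log det of the information matrix.
w = cp.Variable(len(grid), nonneg=True)
M = sum(w[i] * np.outer(F[i], F[i]) for i in range(len(grid)))   # information matrix
problem = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
problem.solve()

mask = w.value > 1e-4
print("support points:", grid[mask])      # classical answer for this model: {-1, 0, 1}
print("weights       :", w.value[mask])
```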

8.
We begin by recalling the tripartite division of statistical problems into the classes M-closed, M-complete, and M-open, and then review the key ideas of introductory Shannon theory. Focusing on the related but distinct goals of model selection and prediction, we argue that different techniques for these two goals are appropriate for the three different problem classes. For M-closed problems we give a relative entropy justification that the Bayesian information criterion (BIC) is appropriate for model selection and that the Bayes model average is information-optimal for prediction. For M-complete problems, we discuss the principle of maximum entropy and a way to use the rate distortion function to bypass the inaccessibility of the true distribution. For prediction in the M-complete class, little work has been done on information-based model averaging, so we discuss the Akaike information criterion (AIC), its properties, and its variants.

For the M-open class, we argue that essentially only predictive criteria are suitable. Thus, as an analogue of model selection, we present the key ideas of prediction along a string under a codelength criterion and propose a general form of this criterion. Since little work appears to have been done on information methods for general prediction in the M-open class of problems, we mention the field of information-theoretic learning in certain general function spaces.

9.
10.
This paper brings together two topics in the estimation of time series forecasting models: the use of the multistep-ahead error sum of squares as a criterion to be minimized, and frequency domain methods for carrying out this minimization. The methods are developed for the wide class of time series models having a spectrum which is linear in unknown coefficients. This includes the IMA(1, 1) model, for which the common exponentially weighted moving average predictor is optimal, besides more general structural models for series exhibiting trends and seasonality. The method is extended to include the Box–Jenkins 'airline' model. The value of the multistep criterion is that it provides protection against using an incorrectly specified model. The value of frequency domain estimation is that the iteratively reweighted least squares scheme for fitting generalized linear models is readily extended to construct the parameter estimates and their standard errors. It also yields insight into the loss of efficiency when the model is correct and into the robustness of the criterion against an incorrect model. A simple example is used to illustrate the method, and a real example demonstrates the extension to seasonal models. The discussion considers a diagnostic test statistic for indicating an incorrect model.
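A time-domain sketch of the multistep criterion itself (the paper carries out the minimization in the frequency domain, which is not reproduced here): for an EWMA predictor the forecast at every horizon equals the current smoothed level, so the h-step-ahead error sum of squares can be minimized directly over the smoothing parameter. The local-level simulation and the horizon are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def multistep_sse(alpha, y, h=4):
    """Sum of h-step-ahead squared forecast errors for the EWMA predictor,
    whose forecast at every horizon equals the current smoothed level."""
    level = y[0]
    sse = 0.0
    for t in range(len(y) - h):
        if t > 0:
            level = alpha * y[t] + (1 - alpha) * level
        sse += (y[t + h] - level) ** 2
    return sse

rng = np.random.default_rng(5)
# A local-level (IMA(1,1)-type) series, for which EWMA prediction is optimal.
mu = np.cumsum(0.3 * rng.normal(size=400))
y = mu + rng.normal(size=400)

res = minimize_scalar(lambda a: multistep_sse(a, y, h=4),
                      bounds=(0.01, 0.99), method="bounded")
print("alpha minimising the 4-step-ahead error sum of squares:", res.x)
```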

11.
Completely random measures (CRMs) represent the key building block of a wide variety of popular stochastic models and play a pivotal role in modern Bayesian nonparametrics. The popular Ferguson & Klass representation of CRMs as a random series with decreasing jumps can immediately be turned into an algorithm for sampling realizations of CRMs, or of more elaborate models involving transformed CRMs. However, concrete implementation requires truncating the random series at some threshold, resulting in an approximation error. The goal of this paper is to quantify the quality of the approximation by a moment-matching criterion, which consists of evaluating a measure of discrepancy between the actual moments and moments based on the simulation output. Seen as a function of the truncation level, the methodology can be used to determine the truncation level needed to reach a given level of precision. The resulting moment-matching Ferguson & Klass algorithm is then implemented and illustrated on several popular Bayesian nonparametric models.
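As a concrete illustration, not the paper's implementation, the sketch below draws Ferguson & Klass jumps for a homogeneous gamma CRM, whose Lévy intensity a s^(-1) e^(-s) ds has tail integral a E1(x), and compares the first two empirical moments of the truncated total mass with the exact moments of its Gamma(a, 1) distribution, the kind of discrepancy the moment-matching criterion monitors. The number of jumps, the value of a, and the root-finding bounds are illustrative.

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import brentq

def fk_gamma_jumps(a=1.0, n_jumps=50, rng=None):
    """Ferguson & Klass jumps of a homogeneous gamma CRM with Levy intensity
    a * s**(-1) * exp(-s) ds: the i-th jump J_i solves a * E1(J_i) = Gamma_i,
    with Gamma_i the arrival times of a unit-rate Poisson process, so the
    jumps come out in decreasing order."""
    rng = rng or np.random.default_rng()
    gammas = np.cumsum(rng.exponential(size=n_jumps))
    tail = lambda x, g: a * exp1(x) - g
    return np.array([brentq(tail, 1e-300, 1e3, args=(g,)) for g in gammas])

# Moment check of the truncation level: the total mass of the full gamma CRM is
# Gamma(a, 1), so its first two moments are a and a * (a + 1).
a = 2.0
rng = np.random.default_rng(7)
masses = np.array([fk_gamma_jumps(a, n_jumps=50, rng=rng).sum() for _ in range(500)])
print("empirical (m1, m2) of truncated mass:", masses.mean(), (masses ** 2).mean())
print("exact     (m1, m2) of full mass     :", a, a * (a + 1))
```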

12.
13.
In this paper, we consider the problem of model-robust design for simultaneous parameter estimation within the class of polynomial regression models of degree up to k. A generalized D-optimality criterion, the Ψα-optimality criterion first introduced by Läuter (1974), is considered for this problem. By applying the theory of canonical moments and a maximin principle, we derive a model-robust optimal design in the sense of having the highest minimum Ψα-efficiency. Numerical comparison indicates that the proposed design performs remarkably well for parameter estimation in all of the considered rival models.

14.
Spatial modeling is important in many fields, and there are various kinds of spatial models. One such model is the fractionally integrated separable spatial ARMA (FISSARMA) model. In the area of time series analysis, Sowell (1992, Maximum likelihood estimation of stationary univariate fractionally integrated time series models, J. Econometrics 53:165–188) established the autocovariance function of long-memory models using the hypergeometric function. In this paper we extend Sowell's work to FISSARMA models.
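For the pure long-memory building block, the fractional-noise ARFIMA(0, d, 0) case, the autocovariance reduces to ratios of gamma functions, and in a separable spatial model the two-dimensional autocovariance is the product of such one-dimensional factors. The sketch below evaluates only that special case under the assumption 0 < d < 0.5; Sowell's general result for models with ARMA components involves hypergeometric functions and is not reproduced here.

```python
import numpy as np
from scipy.special import gammaln

def arfima_acov(k, d, sigma2=1.0):
    """Autocovariance of a fractionally integrated ARFIMA(0, d, 0) process
    with 0 < d < 0.5:
        gamma(k) = sigma^2 * Gamma(1 - 2d) * Gamma(k + d)
                   / (Gamma(d) * Gamma(1 - d) * Gamma(k + 1 - d)),
    evaluated on the log scale for numerical stability."""
    k = np.abs(np.asarray(k, dtype=float))
    log_g = (gammaln(1 - 2 * d) + gammaln(k + d)
             - gammaln(d) - gammaln(1 - d) - gammaln(k + 1 - d))
    return sigma2 * np.exp(log_g)

lags = np.arange(0, 6)
print(arfima_acov(lags, d=0.3))   # slow, hyperbolic decay typical of long memory
# In a separable spatial model the covariance at lag (k1, k2) would factor as
# arfima_acov(k1, d1) * arfima_acov(k2, d2) for the purely fractional case.
```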

15.
This paper studies the optimal experimental design problem for discriminating between two regression models. Recently, López-Fidalgo et al. [2007. An optimal experimental design criterion for discriminating between non-normal models. J. Roy. Statist. Soc. B 69, 231–242] extended the conventional T-optimality criterion of Atkinson and Fedorov [1975a. The designs of experiments for discriminating between two rival models. Biometrika 62, 57–70; 1975b. Optimal design: experiments for discriminating between several models. Biometrika 62, 289–303] to deal with non-normal parametric regression models, and proposed a new optimal experimental design criterion based on the Kullback–Leibler information divergence. In this paper, we extend their parametric optimality criterion to a semiparametric setup, where we only need to specify some moment conditions for the null or alternative regression model. Our criteria, called the semiparametric Kullback–Leibler optimality criteria, can be implemented by applying a convex duality result from partially finite convex programming. The proposed method is illustrated by a simple numerical example.

16.
The concept of slope-rotatability with equal maximum directional variance for second-order response surface models is introduced as a new design property. This requires that the maximum variance of the estimated slope over all possible directions be a function only of ρ, the distance from the design origin. It is shown that a rotatable design satisfies this property. Also, minimization of the maximum variance of the estimated slope over all possible directions is proposed as a new design optimality criterion, and the optimal designs are called slope-directional minimax designs. For the class of equiradial designs, the slope-directional minimax designs are compared with D-optimal designs.

17.
In this article we propose a modification of the recently introduced divergence information criterion (DIC; Mattheou, K., Lee, S., Karagrigoriou, A. (2009). A model selection criterion based on the BHHJ measure of divergence. Journal of Statistical Planning and Inference 139:128–135) for the determination of the order of an autoregressive process, and show that it is an asymptotically unbiased estimator of the expected overall discrepancy, a nonnegative quantity that measures the distance between the true unknown model and a fitted approximating model. Further, we use Monte Carlo methods and various data generating processes for small, medium, and large sample sizes in order to explore the capabilities of the new criterion in selecting the optimal order in autoregressive processes and, more generally, in a time series context. The new criterion shows remarkably good results, choosing the correct model more frequently than traditional information criteria.

18.
For a general mixed model with two variance components θ1 and θ2, a criterion for a function q1θ1 + q2θ2 to admit an unbiased nonnegative definite quadratic estimator is established in a form that allows the question of the existence of such an estimator to be answered more explicitly than with the criteria known hitherto. An application of this result to the case of a random one-way model shows that for many unbalanced models the estimability criterion is expressible directly in terms of the largest of the numbers of observations within levels, thus extending the criterion established by LaMotte (1973) for balanced models.

19.
20.
ABSTRACT

Shared frailty models are often used to model heterogeneity in survival analysis. The most common shared frailty model is one in which the hazard function is the product of a random factor (the frailty) and a baseline hazard function common to all individuals. Certain assumptions are made about the baseline distribution and the distribution of the frailty. In this paper, we consider the inverse Gaussian distribution as the frailty distribution and three different baseline distributions, namely the generalized Rayleigh, the weighted exponential, and the extended Weibull distributions. With these three baseline distributions, we propose three different inverse Gaussian shared frailty models. We also compare these models with the corresponding models without frailty. We develop a Bayesian estimation procedure using Markov chain Monte Carlo (MCMC) techniques to estimate the parameters involved in these models. We present a simulation study to compare the true values of the parameters with the estimated values. A search of the literature suggests that no work has yet been done for these three baseline distributions with a shared inverse Gaussian frailty. We also apply the three models to the real-life bivariate survival data set of McGilchrist and Aisbett (1991, Regression with frailty in survival analysis, Biometrics 47:461–466) on kidney infections, and a better model is suggested for the data using Bayesian model selection criteria.
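To illustrate the structure these models share, the sketch below simulates bivariate survival times under a shared frailty drawn from an inverse Gaussian distribution, using a Weibull-type baseline cumulative hazard as an illustrative stand-in for the paper's generalized Rayleigh, weighted exponential, and extended Weibull baselines; no Bayesian estimation is attempted, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n_pairs = 1000
a, b = 1.5, 10.0          # illustrative Weibull-type baseline: H0(t) = (t / b) ** a
mu, lam = 1.0, 2.0        # inverse Gaussian frailty with mean mu (variance mu**3 / lam)

# One shared frailty per pair (e.g. per patient with two kidney infection times).
u = rng.wald(mu, lam, size=n_pairs)

def draw_time(frailty):
    """Draw a survival time from S(t | u) = exp(-u * H0(t)) by inverting the CDF:
    u * H0(T) is unit exponential, so T = b * (E / u) ** (1 / a)."""
    e = rng.exponential(size=frailty.shape) / frailty
    return b * e ** (1.0 / a)

t1, t2 = draw_time(u), draw_time(u)
# Pearson correlation of the paired times: positive dependence induced by the
# shared frailty, which vanishes if each member of a pair gets its own frailty.
print("correlation between paired survival times:", np.corrcoef(t1, t2)[0, 1])
```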
