Similar Documents
1.
The present article deals with the problem of misspecifying the disturbance-covariance matrix as scalar when it is locally non-scalar. We consider a family of shrinkage estimators based on the OLS estimator and compare its asymptotic properties with those of the OLS estimator. We then propose a similar family of estimators based on FGLS and compare its asymptotic properties with those of the OLS-based shrinkage estimator under a Pitman drift process. The effect of misspecifying the disturbance covariance matrix is analyzed with the help of a numerical simulation.

2.
The paper first shows that the stationary normal AR(1) process (SNAR1), the most frequently used process for generating exogenous variables in econometric Monte Carlo studies, cannot generate realistic exogenous variables, which are generally trended and similar to those generated by an ARIMA(p,d,q) process with d ≥ 1 and positive drift (trend). It then illustrates that, in the context of AR(1) disturbances, trends in exogenous variables can frequently alter the very ranking of two competing estimators, the ordinary least squares (OLS) estimator and the Cochrane-Orcutt (CO) estimator. For three common econometric models (a standard regression model, a dynamic model with a lagged dependent variable, and a seemingly unrelated regression model), OLS becomes superior in many cases. This is so despite the fact that the CO estimator in the study utilizes the true value of the first-order autocorrelation coefficient of the disturbances. The message to be derived from these findings should be clear. If one accepts the fact that most if not all economic time series are trended, and endorses the proposition that the fundamental if not sole purpose of Monte Carlo studies in econometrics should be to provide useful guidelines to practicing econometricians, then one must not employ SNAR1 (nor any other artificially created non-trended series) as a generator of exogenous variables in a Monte Carlo study, at least in the econometrics of autocorrelated disturbances. Alternative methods of generating stochastic exogenous variables that are trended are suggested in the paper.
For almost four decades, the principle of the autoregressive transformation of a regression model with first-order autocorrelated disturbances (the CO estimation principle) has been taken for granted as a method of correcting for autocorrelation in the disturbances, be it in the two-stage Cochrane-Orcutt estimator, the iterative Cochrane-Orcutt estimator, or an estimator utilizing nonlinear techniques or search procedures. (Omitting the first observation due to the transformation is not considered very crucial in general.) The results of the pertinent Monte Carlo studies appear to justify such a procedure only because most studies have employed SNAR1 exogenous variables, not trended ones. Thus, Monte Carlo experimenters must be blamed, at least partially, for this prevailing malpractice. It is hoped that they will not commit additional sins by not using realistic data in their future experiments.

3.
Double censoring often occurs in registry studies when left censoring is present in addition to right censoring. In this work, we examine estimation of Aalen's nonparametric regression coefficients based on doubly censored data. We propose two estimation techniques. The first type of estimators, including the ordinary least squares (OLS) and weighted least squares (WLS) estimators, is obtained using martingale arguments. The second type of estimator, the maximum likelihood estimator (MLE), is obtained via expectation-maximization (EM) algorithms that treat the survival times of left censored observations as missing. Asymptotic properties, including uniform consistency and weak convergence, are established for the MLE. Simulation results demonstrate that the MLE is more efficient than the OLS and WLS estimators.

4.
Eva Fišerová, Statistics, 2013, 47(3): 241-251
We consider an unbiased but inefficient estimator of a function of mean value parameters. This inefficient estimator is correlated with the residual vector. Thus, if the unit dispersion is unknown, it is impossible to determine the correct confidence region for a function of mean value parameters via the standard estimator of the unknown dispersion, except in the case when the ordinary least squares (OLS) estimator is considered in a model with a special covariance structure such that the OLS and generalized least squares (GLS) estimators coincide, that is, when the OLS estimator is efficient. Two different estimators of the unit dispersion, independent of the inefficient estimator, are derived in a singular linear statistical model. Their quality is verified by simulations for several types of experimental designs. The two new estimators of the unit dispersion are compared with the standard estimators based on the GLS and OLS estimators of the function of mean value parameters. The OLS estimator is considered in an incorrect model with a different covariance matrix such that the originally inefficient estimator becomes efficient. The numerical examples lead to a slightly surprising result which seems to be due to the data behaviour. An example from geodetic practice is presented in the paper.

5.
Abstract. We investigate non‐parametric estimation of a monotone baseline hazard and a decreasing baseline density within the Cox model. Two estimators of a non‐decreasing baseline hazard function are proposed. We derive the non‐parametric maximum likelihood estimator and consider a Grenander type estimator, defined as the left‐hand slope of the greatest convex minorant of the Breslow estimator. We demonstrate that the two estimators are strongly consistent and asymptotically equivalent and derive their common limit distribution at a fixed point. Both estimators of a non‐increasing baseline hazard and their asymptotic properties are obtained in a similar manner. Furthermore, we introduce a Grenander type estimator for a non‐increasing baseline density, defined as the left‐hand slope of the least concave majorant of an estimator of the baseline cumulative distribution function, derived from the Breslow estimator. We show that this estimator is strongly consistent and derive its asymptotic distribution at a fixed point.

6.
Abstract. In numerous applications data are observed at random times and an estimated graph of the spectral density may be relevant for characterizing and explaining phenomena. By using a wavelet analysis, one derives a non‐parametric estimator of the spectral density of a Gaussian process with stationary increments (or a stationary Gaussian process) from the observation of one path at random discrete times. For every positive frequency, this estimator is proved to satisfy a central limit theorem with a convergence rate depending on the roughness of the process and the moment of random durations between successive observations. In the case of stationary Gaussian processes, one can compare this estimator with estimators based on the empirical periodogram. Both estimators reach the same optimal rate of convergence, but the estimator based on wavelet analysis converges for a different class of random times. Simulation examples and an application to biological data are also provided.

7.
This paper dwells on the choice between the ordinary least squares and the estimated generalized least squares estimators when the presence of heteroskedasticity is suspected. Since the estimated generalized least squares estimator does not dominate the ordinary least squares estimator completely over the whole parameter space, it is of interest to the researcher to know in advance whether the degree of severity of heteroskedasticity is such that the OLS estimator outperforms the estimated generalized least squares estimator (or 2SAE). Casting the problem in the non-spherical error mold and exploiting the principle underlying the Bayesian pretest estimator, an intuitive non-mathematical procedure is proposed to serve as an aid to the researcher in deciding when to use either the ordinary least squares (OLS) or the estimated generalized least squares (2SAE) estimator.
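The general pretest idea can be sketched numerically. What follows is an illustrative sketch only, assuming a single-regressor model, a Breusch-Pagan-style n·R² statistic computed from squared OLS residuals, and a basic FGLS step with weights taken from the auxiliary regression; the paper's own procedure is intuitive and non-mathematical, and this code does not reproduce it.

```python
import numpy as np

def pretest_estimator(x, y, threshold=3.84):
    """Choose between OLS and a simple FGLS slope via a heteroskedasticity pretest.

    Illustrative only: the statistic is a Breusch-Pagan-style n*R^2 from
    regressing squared OLS residuals on x; 3.84 is the 5% chi-square(1) cutoff.
    """
    X = np.column_stack([np.ones_like(x), x])
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    u2 = (y - X @ beta_ols) ** 2
    # Auxiliary regression of squared residuals on the regressor
    g, *_ = np.linalg.lstsq(X, u2, rcond=None)
    fitted = X @ g
    r2 = 1.0 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)
    if len(y) * r2 < threshold:
        return beta_ols, "OLS"          # heteroskedasticity looks mild: keep OLS
    # FGLS: reweight by the fitted variances from the auxiliary regression
    w = np.sqrt(1.0 / np.clip(fitted, 1e-8, None))
    beta_fgls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return beta_fgls, "FGLS"

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 200)
y = 1.0 + 2.0 * x + rng.normal(0.0, x)   # error s.d. grows with x
beta, chosen = pretest_estimator(x, y)
print(chosen, round(beta[1], 2))
```

With strongly heteroskedastic errors, as simulated here, the pretest routes the data to FGLS; under roughly constant error variance it would usually retain OLS.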

8.
Abstract. In this paper, two non‐parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel‐based approaches. The second estimator involves sequential fitting by univariate local polynomial quantile regressions for each additive component with the other additive components replaced by the corresponding estimates from the first estimator. The purpose of the extra local averaging is to reduce the variance of the first estimator. We show that the second estimator achieves oracle efficiency in the sense that each estimated additive component has the same variance as in the case when all other additive components were known. Asymptotic properties are derived for both estimators under dependent processes that are strictly stationary and absolutely regular. We also provide a demonstrative empirical application of additive quantile models to ambulance travel times.

9.
Most long memory estimators for stationary fractionally integrated time series models are known to experience non-negligible bias in small and finite samples. Simple moment estimators are also vulnerable to such bias, but can easily be corrected. In this article, the authors propose bias reduction methods for a lag-one sample autocorrelation-based moment estimator. In order to reduce the bias of the moment estimator, the authors explicitly obtain the exact bias of the lag-one sample autocorrelation up to order 1/n. An example where the exact first-order bias can be noticeably more accurate than its asymptotic counterpart, even for large samples, is presented. The authors show via a simulation study that the proposed methods are promising and effective in reducing the bias of the moment estimator with minimal variance inflation. The proposed methods are applied to the northern hemisphere data. The Canadian Journal of Statistics 37: 476-493; 2009 © 2009 Statistical Society of Canada
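To illustrate the kind of small-sample bias being corrected, here is a simulation sketch in the simplest possible setting: white noise rather than a fractionally integrated process, for which the first-order bias of the lag-one sample autocorrelation is the well-known -1/n. The paper's exact bias expression for long memory models is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 50, 20000
r1_vals = []
for _ in range(reps):
    x = rng.normal(size=n)            # white noise: true lag-one autocorrelation is 0
    xc = x - x.mean()
    r1_vals.append(np.sum(xc[1:] * xc[:-1]) / np.sum(xc * xc))
r1_mean = np.mean(r1_vals)

# The first-order bias of r1 under white noise is -1/n, so adding 1/n back
# removes most of the small-sample bias.
print(round(r1_mean, 3), round(r1_mean + 1.0 / n, 3))
```

The uncorrected average sits noticeably below zero, while the bias-corrected version is close to the true value of zero.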

10.
Recently, Shabbir and Gupta [Shabbir, J. and Gupta, S. (2011). On estimating finite population mean in simple and stratified random sampling. Communications in Statistics - Theory and Methods, 40(2), 199-212] defined a class of ratio type exponential estimators of the population mean under a very specific linear transformation of the auxiliary variable. In the present article, we propose a generalized class of ratio type exponential estimators of the population mean in simple random sampling under a very general linear transformation of the auxiliary variable; Shabbir and Gupta's (2011) class of estimators is a particular member of our proposed class. It is found that the optimal estimator of our proposed generalized class is always more efficient than almost all the existing estimators defined under the same situations. Moreover, in comparison to a few existing estimators, our proposed estimator becomes more efficient under some simple conditions. The theoretical results obtained in the article are verified by a numerical illustration. Finally, a simulation study is carried out to examine the relative performance of our proposed estimator with respect to some existing estimators that are less efficient than the proposed estimator under certain conditions.
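To give a flavour of how ratio type exponential estimators gain efficiency from a correlated auxiliary variable, here is a simulation sketch of the classical (Bahl-Tuteja style) exponential ratio estimator, not the generalized class proposed in the paper; the population, sample sizes, and model constants are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n, reps = 5000, 200, 3000
x = rng.gamma(4.0, 2.0, size=N)                    # auxiliary variable
y = 3.0 + 1.5 * x + rng.normal(0.0, 2.0, size=N)   # study variable, highly correlated with x
X_mean, Y_mean = x.mean(), y.mean()                # population mean of x is assumed known

err_mean, err_exp = [], []
for _ in range(reps):
    idx = rng.choice(N, size=n, replace=False)     # simple random sampling without replacement
    xs, ys = x[idx].mean(), y[idx].mean()
    err_mean.append(ys - Y_mean)
    # ratio type exponential estimator of the population mean
    err_exp.append(ys * np.exp((X_mean - xs) / (X_mean + xs)) - Y_mean)

mse_mean = np.mean(np.square(err_mean))
mse_exp = np.mean(np.square(err_exp))
print(mse_exp < mse_mean)
```

With a strong positive correlation between the study and auxiliary variables, the exponential estimator has markedly smaller empirical MSE than the plain sample mean.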

11.
Abstract. The Buckley–James estimator (BJE) is a well‐known estimator for linear regression models with censored data. Ritov has generalized the BJE to a semiparametric setting and demonstrated that his class of Buckley–James type estimators is asymptotically equivalent to the class of rank‐based estimators proposed by Tsiatis. In this article, we revisit this relationship for censored data with covariates missing by design. By exploring a similar relationship between our proposed class of Buckley–James type estimating functions and the class of rank‐based estimating functions recently generalized by Nan, Kalbfleisch and Yu, we establish asymptotic properties of our proposed estimators. We also conduct numerical studies to compare the asymptotic efficiencies of various estimators.

12.
This paper considers estimating the model coefficients when the observed periodic autoregressive time series is contaminated by a trend. The proposed Yule–Walker estimators are obtained by a two-step procedure. In the first step, the trend is estimated by a weighted local polynomial, and the residuals are obtained by subtracting the trend estimates from the observations; in the second step, the model coefficients are estimated by the well-known Yule–Walker method via the residuals. It is shown that under certain conditions such Yule–Walker estimators are oracle efficient, i.e., they are asymptotically equivalent to those obtained from periodic autoregressive time series without a trend. An easy-to-use implementation procedure is provided. The performance of the estimators is illustrated by simulation studies and real data analysis. In particular, the simulation studies show that the proposed estimator outperforms the one obtained from residuals when the trend is estimated by kernel smoothing without taking the heteroscedasticity into consideration.
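The two-step procedure can be sketched as follows. This is an illustrative simplification, not the paper's method: a global cubic fit stands in for the weighted local polynomial trend estimator, and a first-order (non-periodic) autoregression stands in for the periodic model.

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi = 500, 0.6

# AR(1) noise superimposed on a smooth linear trend
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):
    e[t] = phi * e[t - 1] + rng.normal()
t_idx = np.arange(n, dtype=float)
y = 0.01 * t_idx + e

# Step 1: estimate the trend (cubic fit on rescaled time) and form residuals
coef = np.polyfit(t_idx / n, y, 3)
resid = y - np.polyval(coef, t_idx / n)

# Step 2: Yule-Walker estimate of the AR(1) coefficient from the residuals
phi_hat = np.sum(resid[1:] * resid[:-1]) / np.sum(resid * resid)
print(round(phi_hat, 2))   # should be near the true value phi = 0.6
```

Estimating coefficients directly from the raw series would confound the trend with the autoregressive dependence; detrending first recovers the AR coefficient.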

13.
ABSTRACT

Advances in statistical computing software have led to a substantial increase in the use of ordinary least squares (OLS) regression models in the engineering and applied statistics communities. Empirical evidence suggests that data sets can routinely have 10% or more outliers in many processes. Unfortunately, these outliers will typically render the OLS parameter estimates useless. The OLS diagnostic quantities and graphical plots can reliably identify a few outliers; however, they significantly lose power with increasing dimension and number of outliers. Although there have been recent advances in methods that detect multiple outliers, improvements are needed in regression estimators that can fit well in the presence of outliers. We introduce a robust regression estimator that performs well regardless of outlier quantity and configuration. Our studies show that the best available estimators are vulnerable when the outliers are extreme in the regressor space (high leverage). Our proposed compound estimator modifies recently published methods with an improved initial estimate and measure of leverage. Extensive performance evaluations indicate that the proposed estimator performs the best and consistently fits the bulk of the data when outliers are present. The estimator, implemented in standard software, provides researchers and practitioners a tool for the model-building process to protect against the severe impact of multiple outliers.

14.
This paper deals with the problem of multicollinearity in a multiple linear regression model with linear equality restrictions. The restricted two-parameter estimator, which was proposed for the case of multicollinearity, satisfies the restrictions. The performance of the restricted two-parameter estimator over the restricted least squares (RLS) estimator and the ordinary least squares (OLS) estimator is examined under the mean square error (MSE) matrix criterion when the restrictions are correct and when they are not. The necessary and sufficient conditions for the restricted ridge regression, restricted Liu, and restricted shrunken estimators, which are special cases of the restricted two-parameter estimator, to have a smaller MSE matrix than the RLS and OLS estimators are derived both when the restrictions hold true and when they do not. Theoretical results are illustrated with numerical examples based on the Webster, Gunst and Mason data and the Gorman and Toman data. We conclude with a Monte Carlo simulation which shows that when the variance of the error term and the correlation between the explanatory variables are large, the restricted two-parameter estimator performs better than the RLS and OLS estimators under the configurations examined.

15.
Central limit theorems play an important role in the study of statistical inference for stochastic processes. However, when the non‐parametric local polynomial threshold estimator, especially in the local linear case, is employed to estimate the diffusion coefficients of diffusion processes, the adaptive and predictable structure of the estimator conditional on the σ‐field generated by the diffusion process is destroyed, so the classical central limit theorem for martingale difference sequences does not apply. For high-frequency data, we prove central limit theorems for local polynomial threshold estimators of the volatility function in diffusion processes with jumps, using Jacod's stable convergence theorem. We believe that our proof procedure for local polynomial threshold estimators provides a new method in this field, especially in the local linear case.

16.
ABSTRACT

In this paper, assuming that there exist omitted variables in the specified model, we analytically derive the exact formula for the mean squared error (MSE) of a heterogeneous pre-test (HPT) estimator whose components are the ordinary least squares (OLS) and feasible ridge regression (FRR) estimators. Since we cannot examine the MSE performance analytically, we execute numerical evaluations to investigate small sample properties of the HPT estimator, and compare the MSE performance of the HPT estimator with those of the FRR estimator and the usual OLS estimator. Our numerical results show that (1) the HPT estimator is more efficient when the model misspecification is severe; (2) the HPT estimator with the optimal critical value obtained under the correctly specified model can be safely used even when there exist omitted variables in the specified model.

17.
Whenever auxiliary information is available in any form, researchers want to utilize it in the method of estimation to obtain the most efficient estimator. When there is sufficient correlation between the study and auxiliary variables, and, parallel to these associations, the ranks of the auxiliary variable are also correlated with the study variable, those ranks can be used as a valuable device for enhancing the precision of an estimator. This article addresses the problem of estimating the finite population mean utilizing complementary information in the presence of (i) the auxiliary variable and (ii) the ranks of the auxiliary variable under nonresponse. We suggest an improved estimator of the finite population mean that uses the auxiliary information in the presence of nonresponse. Expressions for the bias and mean squared error of the considered estimators are derived up to the first order of approximation. The performance of the estimators is compared both theoretically and through a numerical study. It is observed that the proposed estimator is more efficient than the usual sample mean and regression estimators, as well as some other families of ratio and exponential type estimators.

18.
Ye Zongyu, Statistical Research, 2008, 25(6): 102-104
Using stochastic simulation, this paper studies the efficiency of the weighted least squares (GLS) estimator in models with heteroscedastic error sequences. The study shows that the efficiency of the GLS estimator depends on the strength of the heteroscedasticity: when the heteroscedasticity is strong, the GLS estimator is more efficient than the ordinary least squares (OLS) estimator; when the heteroscedasticity is weak, the GLS estimator is less efficient than the OLS estimator.
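A minimal simulation in the same spirit as the strong-heteroscedasticity case, assuming the error standard deviation is proportional to the regressor and that the weighted least squares weights are the true inverse variances; the model and constants are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, beta = 100, 2000, 2.0
x = np.linspace(1.0, 10.0, n)
sigma = 0.5 * x                  # strong heteroscedasticity: error s.d. grows with x
w = 1.0 / sigma**2               # weights: true inverse error variances

ols_est, wls_est = [], []
for _ in range(reps):
    y = beta * x + rng.normal(0.0, sigma)
    ols_est.append(np.sum(x * y) / np.sum(x * x))          # OLS slope (no intercept)
    wls_est.append(np.sum(w * x * y) / np.sum(w * x * x))  # weighted LS slope

print(np.var(ols_est) > np.var(wls_est))
```

Both estimators are unbiased, but under strong heteroscedasticity the weighted estimator has visibly smaller sampling variance; when the weights are nearly constant the two estimators coincide and feasible weighting only adds noise.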

19.
In the presence of multicollinearity, the rk class estimator, a general estimator that includes the ordinary ridge regression (ORR), principal components regression (PCR), and OLS estimators as special cases, is proposed as an alternative to the ordinary least squares (OLS) estimator. Comparison of competing estimators of a parameter in the sense of the mean square error (MSE) criterion is of central interest. An alternative to the MSE criterion is Pitman's (1937) closeness (PC) criterion. In this paper, we compare the rk class estimator to the OLS estimator in terms of the PC criterion, thereby recovering both the comparison of the ORR estimator to the OLS estimator under the PC criterion by Mason et al. (1990) and the comparison of the PCR estimator to the OLS estimator under the PC criterion by Lin and Wei (2002).
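The PC criterion itself is straightforward to estimate by simulation: it is the probability that one estimator lands closer to the true parameter than another. The following sketch assumes a single-regressor model with a simple ridge estimator standing in for the rk class; the penalty and the weak true slope are arbitrary illustrative choices under which shrinkage pays off.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps, k = 0.05, 30, 5000, 20.0   # weak true slope, small sample, ridge penalty
closer = 0
for _ in range(reps):
    x = rng.normal(size=n)
    y = theta * x + rng.normal(size=n)
    sxx = np.sum(x * x)
    b_ols = np.sum(x * y) / sxx
    b_ridge = np.sum(x * y) / (sxx + k)    # ridge shrinks the OLS slope toward zero
    closer += abs(b_ridge - theta) < abs(b_ols - theta)

pc = closer / reps   # estimated P(|ridge error| < |OLS error|)
print(pc > 0.5)
```

In this weak-signal setting the ridge estimator is Pitman-closer than OLS in a clear majority of samples; with a strong true slope the ordering can reverse, which is why analytic PC comparisons over the parameter space are of interest.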

20.
It is common in linear regression models that the error variances are not the same for all observations and that some high leverage data points are present. In such situations, the available literature advocates the use of heteroscedasticity consistent covariance matrix estimators (HCCME) for testing the regression coefficients. Primarily, such estimators are based on residuals derived from the ordinary least squares (OLS) estimator, which can itself be seriously inefficient in the presence of heteroscedasticity. Many efficient estimators, namely the adaptive estimators, are available, but their performance has not yet been evaluated when the problem of heteroscedasticity is accompanied by high leverage data. In this article, the presence of high leverage data is taken into account to evaluate the performance of the adaptive estimator in terms of efficiency. Furthermore, our numerical work also evaluates the performance of robust standard errors based on this efficient estimator in terms of interval estimation and null rejection rate (NRR).
