Similar Documents
20 similar documents found (search time: 859 ms)
1.
Modelling and forecasting time series sampled at different frequencies   (Total citations: 1; self-citations: 0; citations by others: 1)
This paper discusses how to specify an observable high‐frequency model for a vector of time series sampled at high and low frequencies. To this end we first study how aggregation over time affects both the dynamic components of a time series and their observability, in a multivariate linear framework. We find that the basic dynamic components remain unchanged but some of them, mainly those related to the seasonal structure, become unobservable. Building on these results, we propose a structured specification method built on the idea that the models relating the variables in high and low sampling frequencies should be mutually consistent. After specifying a consistent and observable high‐frequency model, standard state‐space techniques provide an adequate framework for estimation, diagnostic checking, data interpolation and forecasting. An example using national accounting data illustrates the practical application of this method. Copyright © 2008 John Wiley & Sons, Ltd.
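As a rough illustration of the aggregation result described above (not code from the paper), the sketch below simulates a monthly AR(1) flow, aggregates it to quarterly sums and fits an ARMA(1,1); under temporal aggregation the AR component survives with coefficient a**3, so the basic dynamic component is preserved. The simulated series and the statsmodels call are assumptions for illustration only.

```python
# Illustrative only: a monthly AR(1) flow aggregated to quarterly sums is
# known to follow an ARMA(1,1) whose AR coefficient equals a**3, so the basic
# dynamic component survives aggregation (the seasonal case, where components
# become unobservable, is not shown here).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
a, n_months = 0.9, 6000
y = np.zeros(n_months)
for t in range(1, n_months):
    y[t] = a * y[t - 1] + rng.standard_normal()

z = y.reshape(-1, 3).sum(axis=1)          # aggregate: quarterly sums (flow)

fit = ARIMA(z, order=(1, 0, 1)).fit()
print("implied quarterly AR coefficient a**3:", a ** 3)
print("estimated quarterly AR coefficient:  ", fit.arparams[0])
```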

2.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they nicely represent the two opposing forecasting philosophies. The DSGE model on the one hand has a strong theoretical economic background; the factor model on the other hand is mainly data‐driven. We show that incorporating a large information set using factor analysis can indeed improve the short‐horizon predictive ability, as claimed by many researchers. The micro‐founded DSGE model can provide reasonable forecasts for US inflation, especially with growing forecast horizons. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short‐horizon forecasting and structural models should be used in long‐horizon forecasting. Our paper compares both state‐of‐the‐art data‐driven and theory‐based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.

3.
A procedure for estimating state space models for multivariate distributed lag processes is described. It involves singular value decomposition techniques and yields an internally balanced state space representation which has attractive properties. Following the specifications of a forecasting competition, the approach is applied to generate ex-post forecasts for US real GNP growth rates. The forecasts of the estimated state space model are compared to those of twelve econometric models and an ARIMA model.
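The abstract does not reproduce the algorithm, but the following sketch shows one standard way to obtain an internally balanced state space realization from impulse responses via singular value decomposition (a Ho–Kalman-style construction); the scalar example system and all names are illustrative assumptions, not the authors' procedure.

```python
# Hypothetical sketch of an SVD-based (internally balanced) state space
# realization from impulse responses. True system: x[t+1] = a*x[t] + b*u[t],
# y[t] = c*x[t], so the Markov parameters are h[i] = c * a**i * b.
import numpy as np

a, b, c = 0.7, 1.0, 2.0
k = 10                                              # Hankel block size
h = np.array([c * a ** i * b for i in range(2 * k)])

# Block Hankel matrices built from the impulse responses.
H0 = np.array([[h[i + j] for j in range(k)] for i in range(k)])
H1 = np.array([[h[i + j + 1] for j in range(k)] for i in range(k)])

U, s, Vt = np.linalg.svd(H0)
r = int(np.sum(s > 1e-8 * s[0]))                    # numerical rank = state dimension
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :]
S_half = np.diag(np.sqrt(sr))
S_half_inv = np.diag(1.0 / np.sqrt(sr))

# Balanced realization: observability = Ur @ S_half, controllability = S_half @ Vr.
A = S_half_inv @ Ur.T @ H1 @ Vr.T @ S_half_inv
B = (S_half @ Vr)[:, :1]
C = (Ur @ S_half)[:1, :]

print("recovered state dimension:", r)              # expect 1
print("recovered pole:", np.linalg.eigvals(A))      # expect ~0.7
print("recovered c*b:", (C @ B).item())             # expect ~2.0
```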

4.
This paper addresses the issue of forecasting the term structure. We provide a unified state‐space modeling framework that encompasses different existing discrete‐time yield curve models. Within such a framework we analyze the impact of two modeling choices, namely the imposition of no‐arbitrage restrictions and the size of the information set used to extract factors, on forecasting performance. Using US yield curve data, we find that both no‐arbitrage restrictions and large information sets help in forecasting, but neither uniformly dominates the other. No‐arbitrage models are more useful at shorter horizons for shorter maturities. Large information sets are more useful at longer horizons and longer maturities. We also find evidence for a significant feedback from yield curve models to macroeconomic variables that could be exploited for macroeconomic forecasting. Copyright © 2010 John Wiley & Sons, Ltd.

5.
We use state space methods to estimate a large dynamic factor model for the Norwegian economy involving 93 variables for 1978Q2–2005Q4. The model is used to obtain forecasts for 22 key variables that can be derived from the original variables by aggregation. To investigate the potential gain in using such a large information set, we compare the forecasting properties of the dynamic factor model with those of univariate benchmark models. We find that there is an overall gain in using the dynamic factor model, but that the gain is notable only for a few of the key variables. Copyright © 2009 John Wiley & Sons, Ltd.

6.
In their seminal book Time Series Analysis: Forecasting and Control, Box and Jenkins (1976) introduce the Airline model, which is still routinely used for the modelling of economic seasonal time series. The Airline model applies to a time series differenced at both the non‐seasonal and seasonal lags and constitutes a linear moving average of lagged Gaussian disturbances which depends on two coefficients and a fixed variance. In this paper a novel approach to seasonal adjustment is developed that is based on the Airline model and that accounts for outliers and breaks in time series. For this purpose we consider the canonical representation of the Airline model. It takes the model as a sum of trend, seasonal and irregular (unobserved) components which are uniquely identified as a result of the canonical decomposition. The resulting unobserved components time series model is extended by components that allow for outliers and breaks. When all components depend on Gaussian disturbances, the model can be cast in state space form and the Kalman filter can compute the exact log‐likelihood function. Related filtering and smoothing algorithms can be used to compute minimum mean squared error estimates of the unobserved components. However, the outlier and break components typically rely on heavy‐tailed densities such as the t or the mixture of normals. For this class of non‐Gaussian models, Monte Carlo simulation techniques will be used for estimation, signal extraction and seasonal adjustment. This robust approach to seasonal adjustment allows outliers to be accounted for, while keeping the underlying structures that are currently used to aid reporting of economic time series data. Copyright © 2006 John Wiley & Sons, Ltd.
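For reference, the Airline model itself, ARIMA(0,1,1)(0,1,1)_12, can be cast in state space form and estimated by Kalman-filter maximum likelihood with standard software; the sketch below uses a simulated stand-in series and statsmodels' SARIMAX as an assumed illustration, not the authors' non-Gaussian extension.

```python
# Illustrative only: the Airline model ARIMA(0,1,1)(0,1,1)_12 fitted in state
# space form, so the Kalman filter evaluates the exact Gaussian log-likelihood.
# The monthly series below is a simulated stand-in, not the paper's data.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
t = np.arange(144)
y = 0.01 * t + 0.2 * np.sin(2 * np.pi * t / 12) + 0.05 * rng.standard_normal(144)

airline = SARIMAX(y, order=(0, 1, 1), seasonal_order=(0, 1, 1, 12))
res = airline.fit(disp=False)
print(res.params)          # the two MA coefficients and the innovation variance
print(res.forecast(12))    # 12 months ahead, from the same state space filter
```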

7.
This paper performs a large‐scale forecast evaluation exercise to assess the performance of different models for the short‐term forecasting of GDP, resorting to large datasets from ten European countries. Several versions of factor models are considered and cross‐country evidence is provided. The forecasting exercise is performed in a simulated real‐time context, which takes account of publication lags in the individual series. In general, we find that factor models perform best and models that exploit monthly information outperform models that use purely quarterly data. However, the improvement over the simpler, quarterly models remains limited. Copyright © 2009 John Wiley & Sons, Ltd.

8.
US inflation appears to undergo shifts in its mean level and variability. We evaluate the performance of three useful models for capturing such shifts. The models studied are Markov switching models, state space models with heavy‐tailed errors, and state space models with compound error distributions. Our study shows that all three models have very similar performance when evaluated in terms of the mean squared or mean absolute forecast errors. However, the latter two models are considerably more parsimonious, and easily beat the more profligately parameterized Markov switching models in terms of model selection criteria such as the AIC or the SBC. Thus, they may serve as useful continuous alternatives to the popular discrete Markov switching models for capturing shifts in time series. Copyright © 2001 John Wiley & Sons, Ltd.

9.
This paper discusses the forecasting performance of alternative factor models based on a large panel of quarterly time series for the German economy. One model extracts factors by static principal components analysis; the second model is based on dynamic principal components obtained using frequency domain methods; the third model is based on subspace algorithms for state‐space models. Out‐of‐sample forecasts show that the forecast errors of the factor models are on average smaller than the errors of a simple autoregressive benchmark model. Among the factor models, the dynamic principal component model and the subspace factor model outperform the static factor model in most cases in terms of mean‐squared forecast error. However, the forecast performance depends crucially on the choice of appropriate information criteria for the auxiliary parameters of the models. In the case of misspecification, rankings of forecast performance can change substantially. Copyright © 2007 John Wiley & Sons, Ltd.

10.
Three general classes of state space models are presented, using the single source of error formulation. The first class is the standard linear model with homoscedastic errors, the second retains the linear structure but incorporates a dynamic form of heteroscedasticity, and the third allows for non‐linear structure in the observation equation as well as heteroscedasticity. These three classes provide stochastic models for a wide variety of exponential smoothing methods. We use these classes to provide exact analytic (matrix) expressions for forecast error variances that can be used to construct prediction intervals one or multiple steps ahead. These formulas are reduced to non‐matrix expressions for 15 state space models that underlie the most common exponential smoothing methods. We discuss relationships between our expressions and previous suggestions for finding forecast error variances and prediction intervals for exponential smoothing methods. Simpler approximations are developed for the more complex schemes and their validity examined. The paper concludes with a numerical example using a non‐linear model. Copyright © 2005 John Wiley & Sons, Ltd.
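A minimal sketch of the simplest case covered by these results, under assumed parameter values: the local level single-source-of-error model behind simple exponential smoothing, whose h-step-ahead forecast error variance has the non-matrix form sigma^2 * (1 + (h - 1) * alpha^2) used to build prediction intervals.

```python
# Hypothetical sketch: simple exponential smoothing in single-source-of-error
# form, y[t] = l[t-1] + e[t], l[t] = l[t-1] + alpha*e[t], with the analytic
# h-step-ahead forecast error variance sigma^2 * (1 + (h - 1) * alpha^2) used
# to build prediction intervals. alpha and sigma are assumed known here.
import numpy as np

def ses_forecast(y, alpha, sigma, h_max):
    level = y[0]
    for obs in y[1:]:
        e = obs - level            # one-step-ahead forecast error
        level = level + alpha * e  # single-source-of-error state update
    forecasts, intervals = [], []
    for h in range(1, h_max + 1):
        var_h = sigma ** 2 * (1 + (h - 1) * alpha ** 2)
        half_width = 1.96 * np.sqrt(var_h)
        forecasts.append(level)                      # flat forecast function
        intervals.append((level - half_width, level + half_width))
    return np.array(forecasts), intervals

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(0, 0.3, 200)) + 10          # stand-in data
fc, pi = ses_forecast(y, alpha=0.3, sigma=0.3, h_max=4)
print(fc)
print(pi)
```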

11.
We perform Bayesian model averaging across different regressions selected from a set of predictors that includes lags of realized volatility, financial and macroeconomic variables. In our model average, we entertain different channels of instability by incorporating breaks in the regression coefficients of each individual model, breaks in the conditional error variance, or both. Changes in these parameters are driven by mixture distributions for state innovations (MIA) of linear Gaussian state‐space models. This framework allows us to compare models that assume small and frequent changes with models that assume large but rare changes in the conditional mean and variance parameters. Results using S&P 500 monthly and quarterly realized volatility data from 1960 to 2014 suggest that Bayesian model averaging combined with breaks in the regression coefficients and the error variance through MIA dynamics generates statistically significantly more accurate forecasts than the benchmark autoregressive model. However, compared to a MIA autoregression with breaks in the regression coefficients and the error variance, we find no substantial further improvement.

12.
Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality‐forecasting models be associated with real‐world trends in health‐related variables? Does inclusion of health‐related factors in models improve forecasts? Do resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle‐related risk factors using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. Results are comparable or better than benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.
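The paper does not spell out the factor extraction, but a common starting point for the latent factor structure in mortality data is a Lee–Carter-style singular value decomposition of centred log mortality rates; the sketch below, with simulated stand-in data, illustrates that step only and is an assumption rather than the authors' Bayesian hierarchical model.

```python
# Hypothetical Lee-Carter-style sketch: extract the leading latent factor k[t]
# from a matrix of log mortality rates (ages x years), the kind of factor the
# paper relates to GDP, health expenditure and lifestyle covariates.
import numpy as np

rng = np.random.default_rng(3)
n_ages, n_years = 20, 40
true_k = -0.03 * np.arange(n_years) + rng.normal(0, 0.02, n_years)
true_b = np.linspace(0.02, 0.08, n_ages)
log_m = (-4 + 0.1 * np.arange(n_ages))[:, None] + np.outer(true_b, true_k) \
        + rng.normal(0, 0.01, (n_ages, n_years))      # stand-in data

a_x = log_m.mean(axis=1)                               # age-specific average level
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0]                                          # age sensitivities
k_t = s[0] * Vt[0, :]                                  # latent period factor

print("share of variance explained by the first factor:",
      s[0] ** 2 / np.sum(s ** 2))
print("latent factor k_t (first five years):", k_t[:5])
```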

13.
Data are now readily available for a very large number of macroeconomic variables that are potentially useful when forecasting. We argue that recent developments in the theory of dynamic factor models enable such large data sets to be summarized by relatively few estimated factors, which can then be used to improve forecast accuracy. In this paper we construct a large macroeconomic data set for the UK, with about 80 variables, model it using a dynamic factor model, and compare the resulting forecasts with those from a set of standard time‐series models. We find that just six factors are sufficient to explain 50% of the variability of all the variables in the data set. These factors can be shown to be related to key variables in the economy, and their use leads to considerable improvements over standard time‐series benchmarks in terms of forecasting performance. Copyright © 2005 John Wiley & Sons, Ltd.
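A simplified sketch of the underlying computation, using simulated stand-in data rather than the UK panel: extract principal-component factors from a standardized panel, measure the share of variance they explain, and use them in a factor-augmented forecasting regression. All names and dimensions are assumptions.

```python
# Hypothetical sketch: extract a handful of principal-component factors from a
# large standardized macro panel, check how much variance they explain (the
# "six factors explain ~50%" style of calculation), then use them in a simple
# factor-augmented forecasting regression. The panel X and target are stand-ins.
import numpy as np

rng = np.random.default_rng(4)
T, N, n_factors = 200, 80, 6
common = rng.standard_normal((T, n_factors))
loadings = rng.standard_normal((n_factors, N))
X = common @ loadings + 2.0 * rng.standard_normal((T, N))   # stand-in panel

Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize each variable
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
F = U[:, :n_factors] * s[:n_factors]            # estimated factors
explained = np.sum(s[:n_factors] ** 2) / np.sum(s ** 2)
print("variance share explained by the factors:", explained)

# Factor-augmented forecast: regress y[t+1] on current factors and y[t].
y = Z[:, 0]                                     # stand-in target variable
design = np.column_stack([np.ones(T - 1), F[:-1], y[:-1]])
beta, *_ = np.linalg.lstsq(design, y[1:], rcond=None)
y_hat_next = np.concatenate([[1.0], F[-1], [y[-1]]]) @ beta
print("one-step-ahead forecast of the target:", y_hat_next)
```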

14.
In this paper, we propose a multivariate time series model for over‐dispersed discrete data to explore market structure based on sales count dynamics. We first discuss the microstructure to show that over‐dispersion is inherent in the modeling of market structure based on sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum, and it aggregates them to higher levels using the Poisson–multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. To address over‐dispersion, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of working with the density functions of the compound distributions, we propose a data augmentation approach that makes posterior computation more efficient in terms of the generated augmented variables, particularly when generating forecasts and predictive densities. In an empirical application to weekly product sales time series from a single store, we compare the proposed over‐dispersed models with alternatives that ignore over‐dispersion using several model selection criteria, including in‐sample fit, out‐of‐sample forecast errors and information criteria. The results show that the proposed compound Poisson‐based over‐dispersed models work well and improve upon models that do not account for over‐dispersion. Copyright © 2014 John Wiley & Sons, Ltd.
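A small sketch of the over-dispersion mechanism at the heart of the model, with assumed parameter values: a gamma compound Poisson (negative binomial) count has variance exceeding its mean, unlike a plain Poisson with the same mean.

```python
# Hypothetical sketch of the over-dispersion mechanism: a gamma-mixed Poisson
# (negative binomial) for weekly sales counts has variance exceeding its mean,
# unlike a plain Poisson with the same mean. Parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(5)
mean_sales, shape = 20.0, 4.0                 # gamma shape controls dispersion
n_weeks = 100_000

lam = rng.gamma(shape, mean_sales / shape, n_weeks)   # heterogeneous rates
overdispersed = rng.poisson(lam)                      # gamma compound Poisson
plain = rng.poisson(mean_sales, n_weeks)              # equi-dispersed baseline

print("compound Poisson  mean/var:", overdispersed.mean(), overdispersed.var())
print("plain Poisson     mean/var:", plain.mean(), plain.var())
# Theoretical variance of the compound model: mean + mean**2 / shape = 120.
```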

15.
This study empirically examines the role of macroeconomic and stock market variables in the dynamic Nelson–Siegel framework with the purpose of fitting and forecasting the term structure of interest rates in the Japanese government bond market. The Nelson–Siegel type models in a state‐space framework considerably outperform benchmark simple time series models such as an AR(1) and a random walk. The yields‐macro model incorporating macroeconomic factors leads to a better in‐sample fit of the term structure than the yields‐only model. The out‐of‐sample predictability of the former for short‐horizon forecasts is superior to the latter for all maturities examined in this study, and for longer horizons the former remains comparable to the latter. Inclusion of macroeconomic factors can dramatically reduce the autocorrelation of forecast errors, which has been a common phenomenon of statistical analysis in previous term structure models. Copyright © 2013 John Wiley & Sons, Ltd.
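A two-step sketch in the spirit of the dynamic Nelson–Siegel approach, using simulated yields rather than the Japanese government bond data: estimate level, slope and curvature factors by cross-sectional regression on the Nelson–Siegel loadings, then forecast each factor with an AR(1) and map back to yields. The decay parameter, maturities and all names are illustrative assumptions.

```python
# Hypothetical two-step sketch of a dynamic Nelson-Siegel model: (1) for each
# date, regress yields on the Nelson-Siegel loadings to get level/slope/
# curvature factors; (2) forecast each factor with an AR(1) and map back to
# yields. Yield data, lambda and maturities are stand-in assumptions.
import numpy as np

def ns_loadings(maturities, lam=0.0609):
    x = lam * maturities
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(x), slope, slope - np.exp(-x)])

maturities = np.array([3, 6, 12, 24, 60, 120], dtype=float)   # months
L = ns_loadings(maturities)

rng = np.random.default_rng(6)
T = 200
true_factors = np.cumsum(rng.normal(0, 0.05, (T, 3)), axis=0) + [5, -1, 1]
yields = true_factors @ L.T + rng.normal(0, 0.02, (T, len(maturities)))

# Step 1: cross-sectional OLS each period -> estimated factor time series.
factors = np.linalg.lstsq(L, yields.T, rcond=None)[0].T       # T x 3

# Step 2: AR(1) per factor, then one-step-ahead yield curve forecast.
forecast_factors = np.empty(3)
for i in range(3):
    f = factors[:, i]
    phi, c = np.polyfit(f[:-1], f[1:], 1)                     # slope, intercept
    forecast_factors[i] = c + phi * f[-1]
print("one-step-ahead yield curve forecast:", L @ forecast_factors)
```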

16.
This paper models bond term premia empirically in terms of the maturity composition of the federal debt and other observable economic variables in a time‐varying framework with potential regime shifts. We present regression and out‐of‐sample forecasting results demonstrating that information on the age composition of the federal debt is useful for forecasting term premia. We show that the multiprocess mixture model, a multi‐state time‐varying parameter model, outperforms the commonly used GARCH model in out‐of‐sample forecasts of term premia. The results underscore the importance of modelling term premia as a function of economic variables rather than just as a function of asset covariances, as in the conditional heteroscedasticity models. Copyright © 2001 John Wiley & Sons, Ltd.
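For concreteness, a sketch of the GARCH(1,1) benchmark referred to above, with assumed (not estimated) parameter values: the conditional variance recursion is filtered through the sample and iterated one step ahead.

```python
# Hypothetical sketch of the GARCH(1,1) benchmark used for comparison: the
# conditional variance recursion h[t] = omega + alpha*r[t-1]**2 + beta*h[t-1]
# filtered over the sample and iterated forward for a variance forecast.
# Parameter values are assumptions, not estimates from the paper's data.
import numpy as np

def garch_variance_path(returns, omega, alpha, beta):
    h = np.empty(len(returns))
    h[0] = returns.var()                      # initialize at sample variance
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(7)
returns = rng.normal(0, 1.0, 500) * 0.01      # stand-in excess returns
omega, alpha, beta = 1e-6, 0.08, 0.90

h = garch_variance_path(returns, omega, alpha, beta)
h_next = omega + alpha * returns[-1] ** 2 + beta * h[-1]   # one-step forecast
print("one-step-ahead conditional variance forecast:", h_next)
```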

17.
In this paper we develop a latent structure extension of a commonly used structural time series model and use the model as a basis for forecasting. Each unobserved regime has its own unique slope and variances to describe the process generating the data, and at any given time period the model predicts a priori which regime best characterizes the data. This is accomplished by using a multinomial logit model in which the primary explanatory variable is a measure of how consistent each regime has been with recent observations. The model is especially well suited to forecasting series which are subject to frequent and/or major shocks. An application to nominal interest rates shows that the behaviour of the three‐month US Treasury bill rate is adequately explained by three regimes. The forecasting accuracy is superior to that produced by a traditional single‐regime model and a standard ARIMA model with a conditionally heteroscedastic error. Copyright © 1999 John Wiley & Sons, Ltd.

18.
Asymmetry has been well documented in the business cycle literature. The asymmetric business cycle suggests that major macroeconomic series, such as a country's unemployment rate, are non‐linear and, therefore, the use of linear models to explain their behaviour and forecast their future values may not be appropriate. Many researchers have focused on providing evidence for the non‐linearity in the unemployment series. Only recently have there been some developments in applying non‐linear models to estimate and forecast unemployment rates. A major concern of non‐linear modelling is the model specification problem; it is very hard to test all possible non‐linear specifications, and to select the most appropriate specification for a particular model. Artificial neural network (ANN) models provide a solution to the difficulty of forecasting unemployment over the asymmetric business cycle. ANN models are non‐linear, do not rely upon the classical regression assumptions, are capable of learning the structure of all kinds of patterns in a data set with a specified degree of accuracy, and can then use this structure to forecast future values of the data. In this paper, we apply two ANN models, a back‐propagation model and a generalized regression neural network model, to estimate and forecast post‐war aggregate unemployment rates in the USA, Canada, UK, France and Japan. We compare the out‐of‐sample forecast results obtained by the ANN models with those obtained by several linear and non‐linear time series models currently used in the literature. It is shown that the artificial neural network models are able to forecast the unemployment series as well as, and in some cases better than, the other univariate econometric time series models in our test. Copyright © 2004 John Wiley & Sons, Ltd.
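A minimal sketch of the univariate setup described above, using a simulated series and scikit-learn's MLPRegressor as a stand-in for the back-propagation network; the lag length, architecture and data are assumptions, not the paper's.

```python
# Hypothetical sketch: a small feed-forward (back-propagation) network mapping
# lagged unemployment rates to the next observation, the univariate setup the
# paper compares against linear models. The series is simulated; MLPRegressor
# stands in for the ANN architectures used in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
T, n_lags = 300, 4
u = 6 + np.cumsum(rng.normal(0, 0.1, T))             # stand-in unemployment rate

X = np.column_stack([u[i:T - n_lags + i] for i in range(n_lags)])
y = u[n_lags:]
X_train, y_train, X_test, y_test = X[:-20], y[:-20], X[-20:], y[-20:]

ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(X_train, y_train)
pred = ann.predict(X_test)
print("out-of-sample RMSE:", np.sqrt(np.mean((pred - y_test) ** 2)))
```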

19.
Hidden Markov models are often used to model daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time‐varying behavior have not been thoroughly examined. This paper presents an adaptive estimation approach that allows for the parameters of the estimated models to be time varying. It is shown that a two‐state Gaussian hidden Markov model with time‐varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time‐varying behavior of the parameters also leads to improved one‐step density forecasts. Finally, it is shown that the forecasting performance of the estimated models can be further improved using local smoothing to forecast the parameter variations. Copyright © 2016 John Wiley & Sons, Ltd.
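A sketch of the forward (Hamilton) filter for a two-state Gaussian hidden Markov model of daily returns, with fixed assumed parameters; the paper's contribution is to let these parameters vary over time, which is not shown here.

```python
# Hypothetical sketch of the forward filter for a two-state Gaussian hidden
# Markov model of daily returns: it delivers the log-likelihood and the
# filtered probability of being in the high-volatility state. Fixed parameter
# values are assumptions; the paper re-estimates them adaptively over time.
import numpy as np
from scipy.stats import norm

def hmm_filter(r, P, mu, sigma):
    """P: 2x2 transition matrix; mu, sigma: per-state mean and std dev."""
    n_states = len(mu)
    pi = np.full(n_states, 1.0 / n_states)        # initial state distribution
    loglik = 0.0
    filtered = np.empty((len(r), n_states))
    for t, rt in enumerate(r):
        dens = norm.pdf(rt, mu, sigma)            # state-conditional densities
        joint = pi * dens
        lik_t = joint.sum()
        loglik += np.log(lik_t)
        filtered[t] = joint / lik_t               # filtered state probabilities
        pi = filtered[t] @ P                      # predict next period's state
    return loglik, filtered

rng = np.random.default_rng(9)
r = np.concatenate([rng.normal(0.0005, 0.005, 250),    # calm regime
                    rng.normal(-0.001, 0.02, 250)])    # turbulent regime
P = np.array([[0.98, 0.02], [0.03, 0.97]])
loglik, filt = hmm_filter(r, P, mu=np.array([0.0005, -0.001]),
                          sigma=np.array([0.005, 0.02]))
print("log-likelihood:", loglik)
print("P(turbulent state) at the end of the sample:", filt[-1, 1])
```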

20.
In this paper we compare the in‐sample fit and out‐of‐sample forecasting performance of no‐arbitrage quadratic, essentially affine and dynamic Nelson–Siegel term structure models. In total, 11 model variants are evaluated, comprising five quadratic, four affine and two Nelson–Siegel models. Recursive re‐estimation and out‐of‐sample 1‐, 6‐ and 12‐month‐ahead forecasts are generated and evaluated using monthly US data for yields observed at maturities of 1, 6, 12, 24, 60 and 120 months. Our results indicate that quadratic models provide the best in‐sample fit, while the best out‐of‐sample performance is generated by three‐factor affine models and the dynamic Nelson–Siegel model variants. Statistical tests fail to identify one single best forecasting model class. Copyright © 2011 John Wiley & Sons, Ltd.


