Similar Literature
20 similar articles found.
1.
This paper examines the relative importance of allowing for time-varying volatility and country interactions in a forecast model of economic activity. These features are introduced by augmenting autoregressive models of growth with cross-country weighted averages of growth and with the generalized autoregressive conditional heteroskedasticity framework. The forecasts are evaluated using statistical criteria through point and density forecasts, and an economic criterion based on forecasting recessions. The results show that, compared to an autoregressive model, both components improve forecast ability in terms of point and density forecasts, especially one-period-ahead forecasts, but that the forecast ability is not stable over time. The random walk model, however, still dominates in terms of forecasting recessions.
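As a minimal illustration of the kind of model this abstract describes, the sketch below augments an AR(1) growth equation with a cross-country weighted average of foreign growth and fits GARCH(1,1) errors in a second step. The data, weights, and the two-step estimation are assumptions for illustration (using the `arch` package), not the paper's specification.

```python
# Sketch: an AR(1) growth equation augmented with a cross-country weighted
# average of foreign growth, with GARCH(1,1) errors fitted to the residuals
# in a second step. Toy data and a two-step fit are simplifications, not the
# paper's exact specification.
import numpy as np
from arch import arch_model  # assumes the 'arch' package is available

rng = np.random.default_rng(0)
T, n_foreign = 200, 5
y = rng.normal(0.5, 1.0, T)                       # domestic growth (toy data)
foreign = rng.normal(0.5, 1.0, (T, n_foreign))    # foreign growth rates
w = np.full(n_foreign, 1 / n_foreign)             # e.g. trade weights
f_bar = foreign @ w                               # cross-country weighted average

# Step 1: augmented autoregression y_t = c + a*y_{t-1} + b*f_bar_{t-1} + e_t (OLS)
X = np.column_stack([np.ones(T - 1), y[:-1], f_bar[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
resid = y[1:] - X @ beta

# Step 2: GARCH(1,1) on the residuals for time-varying forecast uncertainty
garch = arch_model(resid, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")

# One-step-ahead point forecast and conditional variance (density forecast width)
point = beta @ np.array([1.0, y[-1], f_bar[-1]])
var_1step = garch.forecast(horizon=1).variance.iloc[-1, 0]
print(point, var_1step)
```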

2.
This study investigates whether human judgement can be of value to users of industrial learning curves, either alone or in conjunction with statistical models. In a laboratory setting, it compares the forecast accuracy of a statistical model and judgemental forecasts, contingent on three factors: the amount of data available prior to forecasting, the forecasting horizon, and the availability of a decision aid (projections from a fitted learning curve). The results indicate that human judgement was better than the curve forecasts overall. Despite their lack of field experience with learning curve use, 52 of the 79 subjects outperformed the curve on the set of 120 forecasts, based on mean absolute percentage error. Human performance was statistically superior to the model when few data points were available and when forecasting further into the future. These results indicate substantial potential for human judgement to improve predictive accuracy in the industrial learning-curve context. Copyright © 1999 John Wiley & Sons, Ltd.
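For readers unfamiliar with the statistical benchmark in this setting, the sketch below fits a power-law learning curve to early production data, projects it forward, and scores the projection by mean absolute percentage error, the accuracy measure cited above. All numbers are invented for illustration.

```python
# Sketch: fit a power-law learning curve cost = a * units^b to the first few
# data points, project it forward, and score the projection with MAPE.
# All numbers are illustrative, not the study's data.
import numpy as np

cum_units = np.arange(1, 21)                     # cumulative units produced
true_cost = 100 * cum_units ** np.log2(0.8)      # an "80% learning rate" process
observed = true_cost * np.exp(np.random.default_rng(1).normal(0, 0.05, 20))

n_fit = 8                                        # data available before forecasting
b, log_a = np.polyfit(np.log(cum_units[:n_fit]), np.log(observed[:n_fit]), 1)
curve_forecast = np.exp(log_a) * cum_units[n_fit:] ** b

mape = np.mean(np.abs(curve_forecast - observed[n_fit:]) / observed[n_fit:]) * 100
print(f"learning-curve MAPE over the hold-out: {mape:.1f}%")
```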

3.
We employ 47 different algorithms to forecast Australian log real house prices and growth rates, and compare their ability to produce accurate out-of-sample predictions. The algorithms, which are specified in both single- and multi-equation frameworks, consist of traditional time series models, machine learning (ML) procedures, and deep learning neural networks. A method is adopted to compute iterated multistep forecasts from nonlinear ML specifications. While the rankings of forecast accuracy depend on the length of the forecast horizon, as well as on the choice of the dependent variable (log price or growth rate), a few generalizations can be made. For one- and two-quarter-ahead forecasts we find a large number of algorithms that outperform the random walk with drift benchmark. We also report several such outperformances at longer horizons of four and eight quarters, although these are not statistically significant at any conventional level. Six of the eight top forecasts (4 horizons × 2 dependent variables) are generated by the same algorithm, namely a linear support vector regressor (SVR). The other two highest ranked forecasts are produced as simple mean forecast combinations. Linear autoregressive moving average and vector autoregression models produce accurate one-quarter-ahead predictions, while forecasts generated by deep learning nets rank well across medium and long forecast horizons.
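A hedged sketch of one ingredient mentioned here, iterated multistep forecasting from a one-step-ahead ML learner: a linear SVR is fit on lagged values and then fed its own predictions to reach longer horizons. The toy data and lag order are assumptions, not the paper's 47-algorithm setup.

```python
# Sketch of iterated multistep forecasting with a one-step-ahead ML learner:
# fit a linear support vector regressor on lagged values, then feed its own
# predictions back in to reach longer horizons.
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)
growth = rng.normal(0.8, 1.0, 160)                # toy quarterly growth series
p = 4                                             # number of lags

# Build a lag matrix: row t holds [y_{t-1}, ..., y_{t-p}] targeting y_t
X = np.column_stack([growth[p - k - 1:len(growth) - k - 1] for k in range(p)])
y = growth[p:]
model = LinearSVR(C=1.0, max_iter=10000).fit(X, y)

# Iterate the one-step model out to 8 quarters ahead
history = list(growth[-p:])
forecasts = []
for _ in range(8):
    x_next = np.array(history[-p:][::-1]).reshape(1, -1)   # most recent lag first
    y_hat = float(model.predict(x_next)[0])
    forecasts.append(y_hat)
    history.append(y_hat)
print(forecasts)
```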

4.
This paper investigates whether some forecasters consistently outperform others using Japanese CPI forecast data of 42 forecasters over the past 18 quarters. It finds that the accuracy rankings of 0-, 1-, 2-, and 5-month forecasts are significantly different from those that would be expected if all forecasters had equal forecasting ability. Moreover, the forecasters' rankings of the relative forecast levels are also significantly different from random rankings. Copyright © 2009 John Wiley & Sons, Ltd.

5.
In this paper we present results of a simulation study to assess and compare the accuracy of forecasting techniques for long-memory processes in small sample sizes. We analyse differences between adaptive ARMA(1,1) L-step forecasts, where the parameters are estimated by minimizing the sum of squares of L-step forecast errors, and forecasts obtained by using long-memory models. We compare widths of the forecast intervals for both methods, and discuss some computational issues associated with the ARMA(1,1) method. Our results illustrate the importance and usefulness of long-memory models for multi-step forecasting. Copyright © 1999 John Wiley & Sons, Ltd.
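The sketch below illustrates the "adaptive" ARMA(1,1) idea on toy data: the parameters are chosen by minimizing the sum of squared L-step forecast errors rather than one-step errors. The long-memory (ARFIMA) comparison is not reproduced here.

```python
# Sketch: "adaptive" ARMA(1,1) L-step forecasting, where (phi, theta) are chosen
# by minimizing the sum of squared L-step forecast errors instead of one-step
# errors. Zero-mean toy data; the long-memory comparison is omitted.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
e = rng.standard_normal(301)
y = np.zeros(301)
for t in range(1, 301):                           # simulate a persistent ARMA-type series
    y[t] = 0.8 * y[t - 1] + e[t] + 0.3 * e[t - 1]
y = y[1:]
L = 5                                             # forecast horizon of interest

def l_step_sse(params, y, L):
    phi, theta = params
    e = np.zeros(len(y))                          # recursively filtered innovations
    for t in range(1, len(y)):
        e[t] = y[t] - phi * y[t - 1] - theta * e[t - 1]
    origins = np.arange(len(y) - L)
    # L-step forecast from origin t for a zero-mean ARMA(1,1):
    # y_hat(t+L|t) = phi**(L-1) * (phi*y_t + theta*e_t)
    fcast = phi ** (L - 1) * (phi * y[origins] + theta * e[origins])
    return np.sum((y[origins + L] - fcast) ** 2)

res = minimize(l_step_sse, x0=[0.5, 0.0], args=(y, L),
               bounds=[(-0.99, 0.99), (-0.99, 0.99)])
phi_hat, theta_hat = res.x
print(f"L-step-optimised ARMA(1,1): phi={phi_hat:.3f}, theta={theta_hat:.3f}")
```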

6.
In this paper, we forecast real house price growth of 16 OECD countries using information from domestic macroeconomic indicators and global measures of the housing market. Consistent with the findings for the US housing market, we find that the forecasts from an autoregressive model dominate the forecasts from the random walk model for most of the countries in our sample. More importantly, we find that the forecasts from a bivariate model that includes economically important domestic macroeconomic variables and two global indicators of the housing market significantly improve upon the univariate autoregressive model forecasts. Among all the variables considered, the model that includes the country's domestic interest rate delivers the lowest mean square forecast error for most of the countries. The country's income, industrial production, and stock markets are also found to have valuable information about the future movements in real house price growth. There is also some evidence supporting the influence of global housing price growth in out-of-sample forecasting of real house price growth in these OECD countries.

7.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they nicely represent the two opposing forecasting philosophies. The DSGE model on the one hand has a strong theoretical economic background; the factor model on the other hand is mainly data-driven. We show that incorporating a large information set using factor analysis can indeed improve the short-horizon predictive ability, as claimed by many researchers. The micro-founded DSGE model can provide reasonable forecasts for US inflation, especially as the forecast horizon grows. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short-horizon forecasting and structural models should be used in long-horizon forecasting. Our paper compares both state-of-the-art data-driven and theory-based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.
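As a rough illustration of the data-driven side of this comparison, the sketch below builds a diffusion-index (factor) forecast: principal components extracted from a large predictor panel are used to forecast inflation h steps ahead. The panel, horizon, and number of factors are placeholders; the DSGE side is not shown.

```python
# Sketch of a diffusion-index (factor) forecast: compress a large predictor
# panel into a few principal components and regress future inflation on them.
# Toy data; not the paper's dataset or specification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
T, N, h = 200, 80, 4                              # periods, predictors, horizon
panel = rng.standard_normal((T, N))               # large macro panel (toy)
inflation = 2.0 + 0.3 * panel[:, 0] + rng.standard_normal(T) * 0.5

factors = PCA(n_components=3).fit_transform(panel)          # static factors
reg = LinearRegression().fit(factors[:-h], inflation[h:])   # y_{t+h} on F_t
forecast = reg.predict(factors[-1:])[0]                     # h-step-ahead forecast
print(f"{h}-step-ahead factor forecast of inflation: {forecast:.2f}")
```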

8.
A new method is proposed for forecasting electricity load-duration curves. The approach first forecasts the load curve and then uses the resulting predictive densities to forecast the load-duration curve. A virtue of this procedure is that both load curves and load-duration curves can be predicted using the same model, and confidence intervals can be generated for both predictions. The procedure is applied to the problem of predicting New Zealand electricity consumption. A structural time-series model is used to forecast the load curve based on half-hourly data. The model is tailored to handle effects such as daylight savings, holidays and weekends, as well as trend, annual, weekly and daily cycles. Time-series methods, including Kalman filtering, smoothing and prediction, are used to fit the model and to achieve the desired forecasts of the load-duration curve.
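The step from a load-curve forecast to a load-duration-curve forecast can be illustrated very simply: sort each simulated draw from the predictive density of the load in descending order and summarize across draws. The sketch below uses placeholder loads, not the New Zealand data or the structural model of the paper.

```python
# Sketch of the load-curve -> load-duration-curve step: take simulated draws
# from the predictive density of the half-hourly load, sort each draw in
# descending order, and summarize across draws to get a point forecast and
# interval for the load-duration curve. Loads are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_draws, n_periods = 500, 48 * 7                  # one week of half-hours
base = 4000 + 800 * np.sin(np.linspace(0, 14 * np.pi, n_periods))   # toy profile
draws = base + rng.normal(0, 150, (n_draws, n_periods))             # predictive draws

# Each row sorted in descending order is one simulated load-duration curve
ldc_draws = -np.sort(-draws, axis=1)
ldc_point = ldc_draws.mean(axis=0)                               # point forecast
ldc_lo, ldc_hi = np.percentile(ldc_draws, [2.5, 97.5], axis=0)   # 95% interval

exceed_frac = np.arange(1, n_periods + 1) / n_periods            # fraction of time exceeded
print(ldc_point[:3], ldc_lo[:3], ldc_hi[:3], exceed_frac[:3])
```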

9.
This paper describes procedures for forecasting countries' output growth rates and medians of a set of output growth rates using Hierarchical Bayesian (HB) models. The purpose of this paper is to show how the γ-shrinkage forecast of Zellner and Hong (1989) emerges from a hierarchical Bayesian model and to describe how the Gibbs sampler can be used to fit this model to yield possibly improved output growth rate and median output growth rate forecasts. The procedures described in this paper offer two primary methodological contributions to previous work on this topic: (1) the weights associated with widely-used shrinkage forecasts are determined endogenously, and (2) the posterior predictive density of the future median output growth rate is obtained numerically, from which optimal point and interval forecasts are calculated. Using IMF data, we find that the HB median output growth rate forecasts outperform forecasts obtained from a variety of benchmark models. Copyright © 2001 John Wiley & Sons, Ltd.
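A deliberately stylized sketch of the hierarchical-Bayes shrinkage idea, using a simple normal-means model fitted by a Gibbs sampler rather than the paper's exact model: country growth rates are shrunk toward a common mean with the degree of shrinkage determined endogenously, and the posterior draws also give an interval for the cross-country median.

```python
# Sketch of hierarchical-Bayes shrinkage via a Gibbs sampler on a stylized
# normal-means model (not the paper's exact model): y_i ~ N(theta_i, sigma2),
# theta_i ~ N(mu, tau2). Shrinkage toward mu is governed by tau2, which is
# sampled rather than fixed; draws also yield a density for the median.
import numpy as np

rng = np.random.default_rng(0)
y = np.array([2.1, 3.4, 0.8, 1.9, 2.7, 4.0, 1.2, 2.5])   # toy country growth rates
n, sigma2 = len(y), 1.0                                   # sigma2 treated as known
a0, b0 = 2.0, 1.0                                         # inverse-gamma prior on tau2

mu, tau2 = y.mean(), 1.0
theta_draws, median_draws = [], []
for it in range(4000):
    prec = 1 / sigma2 + 1 / tau2                          # update each theta_i
    theta = rng.normal((y / sigma2 + mu / tau2) / prec, np.sqrt(1 / prec))
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / n))      # update common mean
    tau2 = 1 / rng.gamma(a0 + n / 2,                      # update between-country variance
                         1 / (b0 + 0.5 * np.sum((theta - mu) ** 2)))
    if it >= 1000:                                        # discard burn-in
        theta_draws.append(theta)
        median_draws.append(np.median(theta))

theta_post = np.mean(theta_draws, axis=0)                 # shrunken country forecasts
print("shrunken forecasts:", np.round(theta_post, 2))
print("median growth 95% interval:", np.percentile(median_draws, [2.5, 97.5]))
```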

10.
We investigate the realized volatility forecasts of stock indices under structural breaks. We utilize a pure multiple mean break model to identify the possibility of structural breaks in the daily realized volatility series by employing the intraday high-frequency data of the Shanghai Stock Exchange Composite Index and the five sectoral stock indices in Chinese stock markets for the period 4 January 2000 to 30 December 2011. We then conduct both in-sample tests and out-of-sample forecasts to examine the effects of structural breaks on the performance of ARFIMAX-FIGARCH models for the realized volatility forecast by utilizing a variety of estimation window sizes designed to accommodate potential structural breaks. The results of the in-sample tests show that there are multiple breaks in all realized volatility series. The results of the out-of-sample point forecasts indicate that the combination forecasts with time-varying weights across individual forecast models estimated with different estimation windows perform well. In particular, nonlinear combination forecasts with the weights chosen based on a non-parametric kernel regression and linear combination forecasts with the weights chosen based on non-negative restricted least squares and the Schwarz information criterion appear to be the most accurate methods in point forecasting for realized volatility under structural breaks. We also conduct an interval forecast of the realized volatility for the combination approaches, and find that the interval forecast for nonlinear combination approaches with the weights chosen according to a non-parametric kernel regression performs best among the competing models. Copyright © 2014 John Wiley & Sons, Ltd.
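One of the linear combination schemes named above, weights chosen by non-negative restricted least squares, can be sketched as follows on toy forecasts; the break-dated estimation windows, the kernel-regression weights, and the information-criterion step are not reproduced.

```python
# Sketch: combination weights chosen by non-negative restricted least squares
# over a training window of individual realized-volatility forecasts. Toy
# forecasts; not the paper's break-dated estimation windows.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
T, k = 250, 4                                     # training periods, candidate models
rv = np.abs(rng.standard_normal(T)) + 1.0         # realized volatility (toy)
# individual model forecasts = noisy versions of the target with different biases
forecasts = rv[:, None] + rng.normal([0.1, -0.2, 0.0, 0.3], 0.3, (T, k))

weights, _ = nnls(forecasts, rv)                  # w >= 0 minimizing ||rv - F w||
weights = weights / weights.sum()                 # (optionally) normalize to sum to 1
combined_last = forecasts[-1] @ weights           # combination forecast for latest period
print("weights:", np.round(weights, 3), "combined forecast:", round(combined_last, 3))
```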

11.
In attempting to improve forecasting, many facets of the forecasting process may be addressed including techniques, psychological factors, and organizational factors. This research examines whether a robust psychological bias (anchoring and adjustment) can be observed in a set of organizationally-produced forecasts. Rather than a simple consistent bias, biases were found to vary across organizations and items being forecast. Such bias patterns suggest that organizational factors may be important in determining the biases found in organizationally-produced forecasts.

12.
A variety of recent studies provide a skeptical view on the predictability of stock returns. Empirical evidence shows that most prediction models suffer from a loss of information, model uncertainty, and structural instability by relying on low-dimensional information sets. In this study, we evaluate the predictive ability of several recently refined forecasting strategies, which handle these issues by incorporating information from many potential predictor variables simultaneously. We investigate whether forecasting strategies that (i) combine information and (ii) combine individual forecasts are useful to predict US stock returns, that is, the market excess return, size, value, and momentum premiums. Our results show that methods combining information have remarkable in-sample predictive ability. However, their out-of-sample performance suffers from highly volatile forecast errors. Forecast combinations face a better bias-efficiency trade-off, yielding a consistently superior forecast performance for the market excess return and the size premium even after the 1970s.

13.
The ability to improve out-of-sample forecasting performance by combining forecasts is well established in the literature. This paper advances this literature in the area of multivariate volatility forecasts by developing two combination weighting schemes that exploit volatility persistence to emphasise certain losses within the combination estimation period. A comprehensive empirical analysis of out-of-sample forecast performance across varying dimensions, loss functions, sub-samples and forecast horizons shows that the new approaches significantly outperform their counterparts in terms of statistical accuracy. Within the financial applications considered, significant benefits from combination forecasts relative to the individual candidate models are observed. Although the more sophisticated combination approaches consistently rank higher relative to the equally weighted approach, their performance is statistically indistinguishable given the relatively low power of these loss functions. Finally, within the applications, further analysis highlights how combination forecasts dramatically reduce the variability in the parameter of interest, namely the portfolio weight or beta.

14.
Conventional wisdom holds that restrictions on low-frequency dynamics among cointegrated variables should provide more accurate short- to medium-term forecasts than univariate techniques that contain no such information, even though, on standard accuracy measures, the information may not improve long-term forecasting. But inconclusive empirical evidence is complicated by confusion about an appropriate accuracy criterion and the role of integration and cointegration in forecasting accuracy. We evaluate the short- and medium-term forecasting accuracy of univariate Box-Jenkins type ARIMA techniques that imply only integration against multivariate cointegration models that contain both integration and cointegration for a system of five cointegrated Asian exchange rate time series. We use a rolling-window technique to make multiple out-of-sample forecasts from one to forty steps ahead. Relative forecasting accuracy for individual exchange rates appears to be sensitive to the behaviour of the exchange rate series and the forecast horizon length. Over short horizons, ARIMA model forecasts are more accurate for series with moving-average terms of order >1. ECMs perform better over medium-term time horizons for series with no moving-average terms. The results suggest a need to distinguish between ‘sequential’ and ‘synchronous’ forecasting ability in such comparisons. Copyright © 2002 John Wiley & Sons, Ltd.
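A minimal sketch of the univariate half of such a comparison: rolling-window ARIMA forecasts evaluated out of sample at horizons 1 to H, using statsmodels. The series, window length, and (1,1,1) order are assumptions; the cointegrated ECM counterpart is omitted.

```python
# Sketch: rolling-window ARIMA forecasts, 1 to H steps ahead, evaluated out of
# sample. Toy exchange-rate data and a fixed (1,1,1) order; the ECM side of the
# comparison is not shown.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
rate = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 300)))   # toy exchange-rate series
window, H = 200, 8                                          # estimation window, max horizon

errors = {h: [] for h in range(1, H + 1)}
# step through forecast origins (every 5th origin, just to keep the sketch quick)
for origin in range(window, len(rate) - H, 5):
    fit = ARIMA(rate[origin - window:origin], order=(1, 1, 1)).fit()
    fc = fit.forecast(steps=H)                              # forecasts 1..H steps ahead
    for h in range(1, H + 1):
        errors[h].append(rate[origin + h - 1] - fc[h - 1])

rmse = {h: round(float(np.sqrt(np.mean(np.square(errors[h])))), 3) for h in errors}
print(rmse)
```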

15.
Time-series data are often contaminated with outliers due to the influence of unusual and non-repetitive events. Forecast accuracy in such situations is reduced due to (1) a carry-over effect of the outlier on the point forecast and (2) a bias in the estimates of model parameters. Hillmer (1984) and Ledolter (1989) studied the effect of additive outliers on forecasts. It was found that forecast intervals are quite sensitive to additive outliers, but that point forecasts are largely unaffected unless the outlier occurs near the forecast origin. In such a situation the carry-over effect of the outlier can be quite substantial. In this study, we investigate the issues of forecasting when outliers occur near or at the forecast origin. We propose a strategy which first estimates the model parameters and outlier effects using the procedure of Chen and Liu (1993) to reduce the bias in the parameter estimates, and then uses a lower critical value to detect outliers near the forecast origin in the forecasting stage. One focus of this study is the carry-over effect of outliers on forecasts. Four types of outliers are considered: innovational outlier, additive outlier, temporary change, and level shift. The effects due to a misidentification of an outlier type are examined. The performance of the outlier detection procedure is studied for cases where outliers are near the end of the series. In such cases, we demonstrate that statistical procedures may not be able to effectively determine the outlier types due to insufficient information. Some strategies are recommended to reduce potential difficulties caused by incorrectly detected outlier types. These findings may serve as a justification for forecasting in conjunction with judgment. Two real examples are employed to illustrate the issues discussed.
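The carry-over effect discussed here is easy to see in a toy AR(1) example: an additive outlier at the forecast origin contaminates every subsequent forecast unless the origin value is replaced by an outlier-adjusted value. The sketch below assumes the outlier location is known; the Chen and Liu (1993) detection procedure is not implemented.

```python
# Sketch of the carry-over effect: an additive outlier at the forecast origin
# contaminates every forecast of a zero-mean AR(1) model unless the origin
# value is replaced by an adjusted (outlier-corrected) value. Toy data with a
# known outlier; no detection procedure is shown.
import numpy as np

rng = np.random.default_rng(0)
phi, T = 0.7, 200
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.standard_normal()
y_obs = y.copy()
y_obs[-1] += 8.0                                  # additive outlier at the forecast origin

def ar1_forecasts(last_value, phi, horizons):
    # h-step forecast of a zero-mean AR(1): phi**h * y_T
    return [phi ** h * last_value for h in horizons]

horizons = range(1, 6)
print("contaminated origin:", np.round(ar1_forecasts(y_obs[-1], phi, horizons), 2))
print("adjusted origin:    ", np.round(ar1_forecasts(y[-1], phi, horizons), 2))
```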

16.
The recent experience of macroeconomic forecasting in the United Kingdom has prompted renewed interest in the evaluation of economic forecasts. This paper uses cointegration tests to investigate what can be learnt from the forecasts produced by the National Institute of Economic and Social Research (NIESR) over the last two decades. Whilst the forecasts and outturns are found to be cointegrated, there remains evidence of systematic relationships between a number of forecast errors. Our results also fail to reject non-cointegration between different vintages of data, suggesting that considerable care should be exercised in both the choice of realisation data used and in the means by which efficiency is tested.
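A hedged sketch of the kind of test referred to above, an Engle-Granger cointegration test between forecasts and outturns, using simulated series that share a stochastic trend rather than the NIESR data.

```python
# Sketch: an Engle-Granger cointegration test between published forecasts and
# subsequent outturns. The two series are simulated so that they share a
# stochastic trend; the NIESR data are not used.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
outturn = np.cumsum(rng.normal(0.5, 1.0, 120))            # toy I(1) outturn series
forecast = outturn + rng.normal(0, 0.8, 120)              # forecasts tracking the outturn

t_stat, p_value, _ = coint(forecast, outturn)
print(f"Engle-Granger t-stat: {t_stat:.2f}, p-value: {p_value:.3f}")
```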

17.
In a conditional predictive ability test framework, we investigate whether market factors influence the relative conditional predictive ability of realized measures (RMs) and implied volatility (IV). This framework makes it possible to examine the asynchronism in their forecasting accuracy, and we further analyze their unconditional forecasting performance for volatility. Our results show that the asynchronism can be detected significantly and is strongly related to certain market factors, and that the comparison between RMs and IV in terms of average forecast performance is more efficient than in previous studies. Finally, we use the factors to extend the empirical similarity (ES) approach for combining forecasts derived from RMs and IV.

18.
Singular spectrum analysis (SSA) is a powerful nonparametric method in the area of time series analysis that has shown its capability in different application areas. SSA depends on two main choices: the window length L and the number of eigentriples used for grouping, r. One of the most important issues when analyzing time series is the forecasting of new observations. When using SSA for time series forecasting there are several alternative algorithms, the most widely used being the recurrent forecasting model, which assumes that a given observation can be written as a linear combination of the L−1 previous observations. However, when the window length L is large, the forecasting model is unlikely to be parsimonious. In this paper we propose a new parsimonious recurrent forecasting model that uses an optimal number m (< L−1) of coefficients in the linear combination of the recurrent SSA. Our results support the idea of using this new parsimonious recurrent forecasting model instead of the standard recurrent SSA forecasting model.
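A compact sketch of basic SSA with the standard recurrent forecasting formula: embed the series with window length L, keep r leading eigentriples, reconstruct by diagonal averaging, then forecast each new point as a linear combination of the previous L−1 reconstructed values. The parsimonious m-coefficient variant proposed in the paper is not implemented; the data and parameter choices are illustrative.

```python
# Sketch of basic SSA with the standard recurrent forecasting step.
# Toy data; the paper's parsimonious m-coefficient variant is not shown.
import numpy as np

rng = np.random.default_rng(0)
N, L, r = 120, 24, 4
t = np.arange(N)
y = 0.05 * t + np.sin(2 * np.pi * t / 12) + 0.3 * rng.standard_normal(N)

K = N - L + 1
X = np.column_stack([y[j:j + L] for j in range(K)])        # trajectory matrix (L x K)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Xr = (U[:, :r] * s[:r]) @ Vt[:r]                           # rank-r approximation

# Diagonal (anti-diagonal) averaging back to a series of length N
recon = np.zeros(N)
counts = np.zeros(N)
for j in range(K):
    recon[j:j + L] += Xr[:, j]
    counts[j:j + L] += 1
recon /= counts

# Recurrent forecasting coefficients from the leading left singular vectors
P = U[:, :r]
pi = P[-1, :]                                              # last components
nu2 = np.sum(pi ** 2)
R = (P[:-1, :] @ pi) / (1 - nu2)                           # L-1 recurrence coefficients

series = list(recon)
for _ in range(12):                                        # forecast 12 steps ahead
    series.append(float(np.dot(R, series[-(L - 1):])))
print(np.round(series[-12:], 3))
```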

19.
This paper compares the experience of forecasting the UK government bond yield curve before and after the dramatic lowering of short-term interest rates from October 2008. Out-of-sample forecasts for 1, 6 and 12 months are generated from each of a dynamic Nelson–Siegel model, autoregressive models for both yields and the principal components extracted from those yields, a slope regression and a random walk model. At short forecasting horizons, there is little difference in the performance of the models both prior to and after 2008. However, for medium- to longer-term horizons, the slope regression provided the best forecasts prior to 2008, while the recent experience of near-zero short interest rates coincides with a period of forecasting superiority for the autoregressive and dynamic Nelson–Siegel models. Copyright © 2014 John Wiley & Sons, Ltd.
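A rough sketch of the dynamic Nelson–Siegel ingredient of this comparison, in the spirit of Diebold and Li: with the decay parameter fixed, the three factor loadings are known, the level/slope/curvature factors are recovered by cross-sectional OLS each period, and each factor is forecast with an AR(1). The yields below are simulated, not the UK gilt data used in the paper.

```python
# Sketch of a dynamic Nelson-Siegel forecast: fixed decay parameter, factors
# estimated by cross-sectional OLS each period, factors forecast with AR(1),
# then mapped back to a yield-curve forecast. Simulated yields.
import numpy as np

maturities = np.array([3, 6, 12, 24, 36, 60, 84, 120])     # months
lam = 0.0609                                                # a common choice of decay

def ns_loadings(tau, lam):
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau, dtype=float), slope, slope - np.exp(-x)])

rng = np.random.default_rng(0)
T = 200
betas_true = np.column_stack([4 + rng.normal(0, 0.3, T).cumsum() * 0.1,   # level
                              -1 + rng.normal(0, 0.2, T),                  # slope
                              0.5 + rng.normal(0, 0.2, T)])                # curvature
loadings = ns_loadings(maturities, lam)                     # (n_maturities x 3)
yields = betas_true @ loadings.T + rng.normal(0, 0.05, (T, len(maturities)))

# Step 1: cross-sectional OLS each period recovers the three factors
betas_hat = np.linalg.lstsq(loadings, yields.T, rcond=None)[0].T   # (T x 3)

# Step 2: AR(1) forecast of each factor, then map back to a yield-curve forecast
def ar1_forecast(x, h=1):
    c, phi = np.polyfit(x[:-1], x[1:], 1)[::-1]             # x_t = c + phi*x_{t-1}
    f = x[-1]
    for _ in range(h):
        f = c + phi * f
    return f

betas_fc = np.array([ar1_forecast(betas_hat[:, i], h=6) for i in range(3)])
print("6-month-ahead yield-curve forecast:", np.round(loadings @ betas_fc, 2))
```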

20.
Forecasts of interest rates for different maturities are essential for forecasts of asset prices. The growth of derivatives markets coupled with the development of complex theories of the term structure of interest rates has provided forecasters with a rich array of variables for predicting interest rates and yield spreads. This paper extends previous work on forecasting future interest rates and yield spreads using market data for T-bills, T-Notes, and Treasury Bond spot and futures contracts. The information conveyed in technical models that use market data is also assessed, using a recent innovation in interest rate modelling, the maximum smoothness approach. Forecasts from this model are compared with predicted yields and yield spreads derived from futures prices as well as with those of the random walk model. The results show some evidence of market segmentation, with more arbitrage evident for nearby maturities. Market participants appear to show a greater degree of consensus on short-term interest rates than on longer-term interest rates. There is some indication that forecasts from the futures markets are marginally better than those of the maximum-smoothness approach, consistent with the informational advantages of futures markets. Finally, futures and maximum-smoothness market forecasts are shown to outperform those of the random walk model. © 1997 John Wiley & Sons, Ltd.
