Similar Documents
A total of 20 similar documents were found (search time: 46 ms).
1.
In this paper, we assess the predictive content of latent economic policy uncertainty and data surprise factors for forecasting and nowcasting gross domestic product (GDP) using factor-type econometric models. Our analysis focuses on five emerging market economies: Brazil, Indonesia, Mexico, South Africa, and Turkey; and we carry out a forecasting horse race in which predictions from various models are compared. These models may (or may not) contain latent uncertainty and surprise factors constructed using both local and global economic datasets. The set of models that we examine in our experiments includes both simple benchmark linear econometric models and dynamic factor models that are estimated using a variety of frequentist and Bayesian data shrinkage methods based on the least absolute shrinkage and selection operator (LASSO). We find that the inclusion of our new uncertainty and surprise factors leads to superior predictions of GDP growth, particularly when these latent factors are constructed using Bayesian variants of the LASSO. Overall, our findings point to the importance of spillover effects from global uncertainty and data surprises when predicting GDP growth in emerging market economies.
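A minimal sketch of the factor-plus-shrinkage idea described in this abstract (not the authors' pipeline): a latent factor is extracted from a simulated panel of uncertainty and surprise indicators by principal components and then enters a LASSO-penalized forecasting regression for GDP growth, a frequentist stand-in for the Bayesian LASSO variants the paper favors. All variable names, dimensions, and data below are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
T, N = 120, 30                       # 120 quarters, 30 uncertainty/surprise series
panel = rng.standard_normal((T, N))  # stand-in for a standardized indicator panel
gdp_growth = rng.standard_normal(T)  # stand-in for GDP growth (the target)

# Step 1: latent factor = first principal component of the indicator panel.
factor = PCA(n_components=1).fit_transform(panel)

# Step 2: predictors = lagged GDP growth plus the lagged latent factor.
X = np.column_stack([gdp_growth[:-1], factor[:-1, 0]])
y = gdp_growth[1:]

# Step 3: LASSO with a cross-validated penalty (frequentist stand-in for the
# Bayesian LASSO variants discussed in the abstract).
model = LassoCV(cv=5).fit(X, y)
one_step = model.predict(np.column_stack([gdp_growth[-1:], factor[-1:, 0]]))[0]
print("coefficients:", model.coef_, "one-step-ahead forecast:", one_step)
```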

2.
This paper explores the role of business cycle proxies, measured by the output gap at the global, regional, and local levels, as potential predictors of stock market volatility in the emerging BRICS nations. We observe that these nations display a rather heterogeneous pattern when it comes to the relative role of idiosyncratic factors as predictors of stock market volatility. While the domestic output gap is found to capture significant predictive information particularly for India and China, the business cycles associated with emerging economies and the world in general are strongly important for the BRIC countries and only weakly so for South Africa, especially in the post-global financial crisis era. The findings suggest that, despite the increase in the financial integration of world capital markets, emerging economies can still bear significant exposures to idiosyncratic risk factors, an issue of high importance for the profitability of global diversification strategies.
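For orientation only, here is a hedged sketch of the kind of predictive regression implied by the abstract: lagged local and global output-gap proxies as predictors of volatility, with HAC standard errors. All series are simulated placeholders, not the BRICS data used in the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 240                                   # monthly observations
local_gap = rng.standard_normal(T)        # placeholder for the domestic output gap
global_gap = rng.standard_normal(T)       # placeholder for the global output gap
volatility = 0.2 + 0.3 * np.abs(rng.standard_normal(T))  # placeholder volatility

# Predictive regression: volatility(t+1) on the two lagged gap measures.
X = sm.add_constant(np.column_stack([local_gap[:-1], global_gap[:-1]]))
y = volatility[1:]
fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 12})
print(fit.params)    # which gap carries predictive content?
print(fit.pvalues)
```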

3.
This paper compares various ways of extracting macroeconomic information from a data-rich environment for forecasting the yield curve using the Nelson–Siegel model. Five issues in extracting factors from a large panel of macro variables are addressed; namely, selection of a subset of the available information, incorporation of the forecast objective in constructing factors, specification of a multivariate forecast objective, data grouping before constructing factors, and selection of the number of factors in a data-driven way. Our empirical results show that each of these features helps to improve forecast accuracy, especially for the shortest and longest maturities. Factor-augmented methods perform well in relatively volatile periods, including the crisis period in 2008–9, when simpler models do not suffice. The macroeconomic information is exploited best by partial least squares methods, with principal component methods ranking second best. Reductions of mean squared prediction errors of 20–30% are attained, compared to the Nelson–Siegel model without macro factors. Copyright © 2011 John Wiley & Sons, Ltd.
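A hedged sketch of the factor-extraction idea: use partial least squares to compress a large macro panel into a few components that target the Nelson–Siegel yield factors (level, slope, curvature), in contrast to principal components, which ignore the forecast target. The data and dimensions below are simulated assumptions, not the paper's panel.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
T, N = 200, 100                            # months x macro series (illustrative)
macro_panel = rng.standard_normal((T, N))
ns_factors = rng.standard_normal((T, 3))   # stand-in for level, slope, curvature

# Supervised compression: PLS components are built to explain the forecast
# objective (next-period NS factors), not just the variance of the panel.
pls = PLSRegression(n_components=4).fit(macro_panel[:-1], ns_factors[1:])
next_factors = pls.predict(macro_panel[-1:])   # one-step-ahead NS factor forecast
print(next_factors)
```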

4.
We utilize mixed-frequency factor-MIDAS models for the purpose of carrying out backcasting, nowcasting, and forecasting experiments using real-time data. We also introduce a new real-time Korean GDP dataset, which is the focus of our experiments. The methodology that we utilize involves first estimating common latent factors (i.e., diffusion indices) from 190 monthly macroeconomic and financial series using various estimation strategies. These factors are then included, along with standard variables measured at multiple different frequencies, in various factor-MIDAS prediction models. Our key empirical findings are as follows. (i) When using real-time data, factor-MIDAS prediction models outperform various linear benchmark models. Interestingly, the "MSFE-best" MIDAS models contain no autoregressive (AR) lag terms when backcasting and nowcasting. AR terms only begin to play a role in "true" forecasting contexts. (ii) Models that utilize only one or two factors are "MSFE-best" at all forecasting horizons, but not at any backcasting or nowcasting horizons. In these latter contexts, much more heavily parametrized models with many factors are preferred. (iii) Real-time data are crucial for forecasting Korean gross domestic product, and the use of "first available" versus "most recent" data strongly affects model selection and performance. (iv) Recursively estimated models are almost always "MSFE-best," and models estimated using autoregressive interpolation dominate those estimated using other interpolation methods. (v) Factors estimated using recursive principal component estimation methods have more predictive content than those estimated using a variety of other (more sophisticated) approaches. This result is particularly prevalent for our "MSFE-best" factor-MIDAS models, across virtually all forecast horizons, estimation schemes, and data vintages that are analyzed.
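A minimal sketch of the factor-MIDAS mechanics (not the authors' code): a monthly latent factor enters a quarterly GDP equation through exponential Almon lag weights, so the monthly data are used directly rather than averaged to quarters first. The weight parameters are fixed at illustrative values here, whereas in practice they are estimated jointly with the regression; all data are simulated.

```python
import numpy as np

def exp_almon_weights(n_lags, theta1=0.05, theta2=-0.01):
    """Exponential Almon lag weights, normalized to sum to one."""
    lags = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * lags + theta2 * lags ** 2)
    return w / w.sum()

rng = np.random.default_rng(3)
n_quarters, n_lags = 80, 9
# Monthly factor series, padded at the start so every quarter has n_lags lags.
monthly_factor = rng.standard_normal(3 * n_quarters + n_lags)
gdp = rng.standard_normal(n_quarters)            # quarterly target (placeholder)

w = exp_almon_weights(n_lags)
# For quarter q, aggregate the n_lags most recent monthly factor values
# (ending with the last month of the quarter) using the Almon weights.
X = np.array([w @ monthly_factor[n_lags + 3 * q + 2 - np.arange(n_lags)]
              for q in range(n_quarters)])

Z = np.column_stack([np.ones(n_quarters), X])    # intercept + MIDAS term
beta, *_ = np.linalg.lstsq(Z, gdp, rcond=None)
print("fitted nowcast for the most recent quarter:", Z[-1] @ beta)
```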

5.
This paper examines six alternative indicators of global economic activity to empirically assess their relative predictive power for forecasting crude oil market volatility. A GARCH-MIDAS approach is constructed to accommodate all the relevant series at their available data frequencies, thereby circumventing information loss and any associated bias. We find evidence in support of global economic activity as a good predictor of energy market volatility. Our forecast evaluation of the various indicators places a higher weight on the newly developed indicator of global economic activity, which is based on a set of 16 variables covering multiple dimensions of the global economy that other indicators do not seem to capture. Furthermore, we find that accounting for any inherent asymmetry in the global economic activity proxies improves the forecast accuracy of the GARCH-MIDAS-X model for oil volatility. The results leading to these conclusions are robust to multiple forecast horizons and consistent across alternative energy sources.
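A structural sketch of the GARCH-MIDAS variance decomposition the abstract relies on, with parameters fixed at illustrative values rather than estimated by maximum likelihood as in the paper: total conditional variance equals a long-run component driven by a MIDAS-weighted monthly activity indicator times a short-run GARCH(1,1) component that mean-reverts to one. All series are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
n_months, n_days = 36, 22
activity = rng.standard_normal(n_months)           # monthly global-activity proxy
returns = 0.01 * rng.standard_normal(n_months * n_days)  # daily oil returns (toy)

K, w1, w2 = 12, 1.0, 3.0                            # beta-weight MIDAS scheme
j = np.arange(1, K + 1) / K
beta_w = j ** (w1 - 1) * (1 - j) ** (w2 - 1)
beta_w /= beta_w.sum()

m, theta = -9.0, 0.3                                # long-run level and loading
alpha, beta = 0.06, 0.91                            # short-run GARCH parameters

g = 1.0                                             # short-run component
for t, r in enumerate(returns):
    month = t // n_days
    lags = activity[max(0, month - K):month][::-1]  # most recent monthly lags
    tau = np.exp(m + theta * (beta_w[:len(lags)] @ lags)) if len(lags) else np.exp(m)
    var_t = tau * g                                 # conditional variance today
    g = (1 - alpha - beta) + alpha * r ** 2 / tau + beta * g  # update short-run part
print("final conditional variance:", var_t)
```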

6.
This paper proposes a strategy to detect the presence of common serial correlation in large-dimensional systems. We show that partial least squares can be used to consistently recover the common autocorrelation space. Moreover, a Monte Carlo study reveals that univariate autocorrelation tests on the factors obtained by partial least squares outperform traditional tests based on canonical correlation analysis. Some empirical applications are presented to illustrate concepts and methods. Copyright © 2010 John Wiley & Sons, Ltd.
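A hedged illustration of the idea (not the paper's exact procedure): run partial least squares between a large panel and its own first lag to extract candidate common-autocorrelation directions, then apply a univariate Ljung–Box test to each extracted factor. The simulated panel contains one persistent common component by construction.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(5)
T, N = 500, 20
common = np.zeros(T)
for t in range(1, T):                        # one persistent common AR(1) component
    common[t] = 0.7 * common[t - 1] + rng.standard_normal()
panel = np.outer(common, rng.standard_normal(N)) + rng.standard_normal((T, N))

# PLS between the lagged panel and the current panel.
pls = PLSRegression(n_components=2).fit(panel[:-1], panel[1:])
factors = pls.transform(panel[:-1])          # scores spanning the candidate space
for i in range(factors.shape[1]):
    lb = acorr_ljungbox(factors[:, i], lags=[10])
    print(f"factor {i + 1}: Ljung-Box p-value = {float(lb['lb_pvalue'].iloc[0]):.3f}")
```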

7.
This paper investigates the transmission patterns of stock market movements between developed and emerging market economies by estimating a four-variable VAR model. The underlying economic fundamentals and trade links are considered as possible determinants of differences in transmission patterns. The results of the impulse response functions and variance decompositions indicate that significant links exist between the stock markets of the USA and Mexico, and weaker links between the markets of the USA, Argentina, and Brazil. Differences in the patterns of stock market responses are consistent with differences in trade flows. The response of emerging markets to a shock to the US market lasts longer than that of a developed market such as the UK. While no single emerging market can affect the US stock market, the combined effect of emerging markets on the US stock market is found to be statistically significant. These findings can be linked to differences in the speed of information processing and to the institutional structure governing the market. Overall, the findings suggest that the transmission of stock market movements is in accord with underlying economic fundamentals rather than irrational contagion effects. Copyright © 2000 John Wiley & Sons, Ltd.
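An illustrative sketch of the toolkit referred to above: a four-variable VAR with impulse responses and forecast-error variance decompositions. The simulated series stand in for US, UK, Mexican, and Brazilian index returns and are not the paper's data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)
T = 500
us = rng.standard_normal(T)
uk = 0.5 * us + rng.standard_normal(T)                 # developed market tracks the US
mex = 0.4 * np.roll(us, 1) + rng.standard_normal(T)    # lagged response to the US
bra = 0.1 * np.roll(us, 1) + rng.standard_normal(T)    # weaker lagged response
data = pd.DataFrame({"US": us, "UK": uk, "Mexico": mex, "Brazil": bra})

model = VAR(data).fit(maxlags=5, ic="aic")
irf = model.irf(10)                  # impulse responses over 10 periods
fevd = model.fevd(10)                # forecast-error variance decompositions
print(irf.irfs[:, :, 0])             # responses of all markets to a US shock
fevd.summary()                       # prints the decomposition tables
```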

8.
This study aims to gain insight into timely, accurate, and relevant combined forecasts by considering social media (Facebook), opinion polls, and prediction markets. We transformed each type of raw data into a probability of victory to serve as a forecasting model. Besides the four single forecasts, namely Facebook fans, Facebook "people talking about this" (PTAT) statistics, opinion polls, and prediction markets, we generated three combined forecasts by associating various combinations of the four components. Then, we examined the predictive performance of each forecast on vote shares and the elected/non-elected outcome across the election period. Our findings, based on the evidence of Taiwan's 2018 county and city elections, showed that incorporating the Facebook PTAT statistic with polls and prediction markets generates the most powerful forecast. Moreover, we found that the time horizon matters: the best proposed model achieves its accuracy gains "late in the election" period, but not when "approaching the election". The patterns of the trend of accuracy across time also differ from one forecasting model to another. We also highlight the complementarity of the various types of data, because each forecast makes an important contribution to forecasting elections.

9.
10.
It is widely acknowledged that the patient's perspective should be considered when making decisions about how her care will be managed. Patient participation in the decision making process may play an important role in bringing to light and incorporating her perspective. The GRADE framework is touted as an evidence-based process for determining recommendations for clinical practice; i.e. determining how care ought to be managed. GRADE recommendations are categorized as "strong" or "weak" based on several factors, including the "values and preferences" of a "typical" patient. The strength of the recommendation also provides instruction to the clinician about when and how patients should participate in the clinical encounter, and thus whether an individual patient's values and preferences will be heard in her clinical encounter. That is, a "strong" recommendation encourages "paternalism" and a "weak" recommendation encourages shared decision making. We argue that adoption of the GRADE framework is problematic to patient participation and may result in care that is not respectful of the individual patient's values and preferences. We argue that the root of the problem is the conception of "values and preferences" in GRADE – the framework favours population thinking (e.g. "typical" patient "values and preferences"), despite the fact that "values and preferences" are individual in the sense that they are deeply personal. We also show that tying the strength of a recommendation to a model of decision making (paternalism or shared decision making) constrains patient participation and is not justified (theoretically and/or empirically) in the GRADE literature.

11.
Dynamic model averaging (DMA) is used extensively for the purpose of economic forecasting. This study extends the framework of DMA by introducing adaptive learning from model space. In the conventional DMA framework all models are estimated independently, and hence the information in the other models is left unexploited. In order to exploit this information in the estimation of the individual time-varying parameter models, this paper proposes not only to average over the forecasts but also to dynamically average over the time-varying parameters. This is done by approximating the mixture of individual posteriors with a single posterior, which is then used in the upcoming period as the prior for each of the individual models. The relevance of this extension is illustrated in three empirical examples involving forecasting US inflation, US consumption expenditures, and five major US exchange rate returns. In all applications adaptive learning from model space delivers improvements in out-of-sample forecasting performance.
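A stripped-down sketch of the DMA weight update with a forgetting factor, plus a toy stand-in for the "adaptive learning from model space" step: after each period the individual models' coefficients are shrunk toward their probability-weighted average. The two models here are simple recursive regressions with slow and fast learning rates on the same regressor; everything (data, forgetting factor, noise scale, shrinkage weight) is an illustrative assumption, not the paper's specification.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
T = 200
x = rng.standard_normal(T)
beta_true = np.linspace(0.2, 1.0, T)           # slowly drifting true coefficient
y = beta_true * x + 0.1 * rng.standard_normal(T)

alpha = 0.99                                   # forgetting factor for model probabilities
lams = np.array([0.02, 0.2])                   # slow-learning and fast-learning models
probs = np.array([0.5, 0.5])                   # dynamic model probabilities
betas = np.array([0.0, 0.0])                   # one coefficient per model

for t in range(T):
    preds = betas * x[t]
    dma_forecast = probs @ preds               # the DMA point forecast for period t
    # Probability update: forgetting, then a Bayes step with Gaussian densities.
    lik = norm.pdf(y[t], loc=preds, scale=0.3)
    w = probs ** alpha * lik
    probs = w / (w.sum() + 1e-12)
    # Per-model recursive coefficient updates ...
    betas = betas + lams * (y[t] - preds) * x[t]
    # ... then "learning from model space": shrink toward the pooled coefficient.
    betas = 0.9 * betas + 0.1 * (probs @ betas)

print("final model probabilities:", probs)
print("final coefficients (true value ends at 1.0):", betas)
```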

12.
Successful market timing strategies depend on superior forecasting ability. We use a sentiment index model, a kitchen sink logistic regression model, and a machine learning model (least absolute shrinkage and selection operator, LASSO) to forecast 1-month-ahead S&P 500 Index returns. In order to determine how successful each strategy is at forecasting the market direction, a "beta optimization" strategy is implemented. We find that the LASSO model outperforms the other models with consistently higher annual returns and lower monthly drawdowns.
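A hedged sketch of the direction-forecasting step: an L1-penalized ("LASSO") logistic regression maps lagged predictors into the probability of a positive 1-month-ahead index return, followed by a crude timing rule. The paper's "beta optimization" strategy is more elaborate; the predictors and data below are simulated placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
T, K = 360, 12                                  # 30 years of monthly data, 12 predictors
X = rng.standard_normal((T, K))
ret = 0.3 * X[:, 0] + rng.standard_normal(T)    # next-month index return (toy)
up = (ret > 0).astype(int)                      # market direction

train, test = slice(0, 300), slice(300, T)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X[train], up[train])
prob_up = clf.predict_proba(X[test])[:, 1]

# Crude market-timing rule: hold the index only when P(up) exceeds 0.5.
strategy_ret = np.where(prob_up > 0.5, ret[test], 0.0)
print("selected predictors:", np.flatnonzero(clf.coef_[0]))
print("average monthly strategy return:", strategy_ret.mean())
```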

13.
In 1985, more than thirty geomorphologists, planetary scientists, and remote sensing specialists gathered at a conference center in Oracle, Arizona, to discuss an emerging area of research that they called "mega-geomorphology." Building on a conference of the same name held in London in 1981, they argued that new techniques of remote sensing and insights emerging from the study of extraterrestrial planets had created opportunities for geomorphology to broaden its spatial and temporal scope. This new approach was, however, neither unproblematic nor uncontested. In the discussions around mega-geomorphology that took place in the mid-1980s, the perceived conflict between the use of remote-sensing techniques to observe phenomena on vast spatial scales, on one hand, and the disciplinary centrality of fieldwork and field experience to geomorphology, on the other, was a recurrent theme. In response, mega-geomorphologists attempted to re-situate fieldwork and re-narrate disciplinary histories in such a way as to make remote sensing and planetary science not only compatible with geomorphological traditions but also means of revitalizing them. Only partially successful, these attempts reveal that the process of adopting a planetary perspective in geomorphology, as in other earth sciences, was neither straightforward nor inevitable. They also show how the field and fieldwork could remain central to geomorphology while also being extensively revised in light of new technical possibilities and theoretical frameworks.

14.
We investigate realized volatility forecasts of stock indices under structural breaks. We utilize a pure multiple mean break model to identify structural breaks in the daily realized volatility series, employing intraday high-frequency data on the Shanghai Stock Exchange Composite Index and the five sectoral stock indices in Chinese stock markets for the period 4 January 2000 to 30 December 2011. We then conduct both in-sample tests and out-of-sample forecasts to examine the effects of structural breaks on the performance of ARFIMAX-FIGARCH models for the realized volatility forecast, utilizing a variety of estimation window sizes designed to accommodate potential structural breaks. The results of the in-sample tests show that there are multiple breaks in all realized volatility series. The results of the out-of-sample point forecasts indicate that combination forecasts with time-varying weights across individual forecast models estimated with different estimation windows perform well. In particular, nonlinear combination forecasts with weights chosen based on a non-parametric kernel regression and linear combination forecasts with weights chosen based on non-negative restricted least squares and the Schwarz information criterion appear to be the most accurate methods for point forecasting of realized volatility under structural breaks. We also conduct an interval forecast of the realized volatility for the combination approaches, and find that the interval forecast for nonlinear combination approaches with weights chosen according to a non-parametric kernel regression performs best among the competing models. Copyright © 2014 John Wiley & Sons, Ltd.
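An illustrative sketch of one of the combination schemes mentioned above: volatility forecasts produced with different estimation windows are combined with weights from non-negative least squares, fitted on a hold-out period and normalized to sum to one. The candidate forecasts below are simulated placeholders for short-, medium-, and long-window model forecasts.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)
T = 250
true_rv = np.abs(rng.standard_normal(T))                   # realized volatility (toy)
# Three candidate forecasts, e.g., from short, medium, and long estimation windows.
F = np.column_stack([true_rv + 0.3 * rng.standard_normal(T),
                     true_rv + 0.2 * rng.standard_normal(T),
                     0.8 * true_rv + 0.4 * rng.standard_normal(T)])

weights, _ = nnls(F[:150], true_rv[:150])                  # non-negative weights, in-sample
weights /= weights.sum()                                   # normalize to sum to one
combined = F[150:] @ weights                               # out-of-sample combination
mse = np.mean((combined - true_rv[150:]) ** 2)
print("combination weights:", weights, "out-of-sample MSE:", mse)
```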

15.
Micro panels characterized by large numbers of individuals observed over a short time period provide a rich source of information, but as yet there is only limited experience in using such data for forecasting. Existing simulation evidence supports the use of a fixed-effects approach when forecasting, but it is not based on a truly micro panel set-up. In this study, we exploit the linkage of a representative survey of more than 250,000 Australians aged 45 and over to 4 years of hospital, medical and pharmaceutical records. The availability of panel health cost data allows the use of predictors based on fixed-effects estimates designed to guard against possible omitted variable biases associated with unobservable individual specific effects. We demonstrate that the preference for fixed-effects-based predictors is unlikely to hold in many practical situations, including our models of health care costs. Simulation evidence with a micro panel set-up adds support and additional insights to the results obtained in the application. These results are supportive of the use of the ordinary least squares predictor in a wide range of circumstances. Copyright © 2016 John Wiley & Sons, Ltd.
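A small simulation sketch in the spirit of the comparison above (parameter values are illustrative assumptions, not the paper's design): when individual effects are small relative to the idiosyncratic noise and the time dimension is short, a pooled OLS predictor can out-forecast a fixed-effects (within) predictor whose individual intercepts are estimated from only a few observations.

```python
import numpy as np

rng = np.random.default_rng(10)
N, T = 5000, 4                                    # many individuals, few periods
effects = 0.2 * rng.standard_normal(N)            # small individual effects
x = rng.standard_normal((N, T + 1))
y = effects[:, None] + 1.0 * x + rng.standard_normal((N, T + 1))

X_train, y_train = x[:, :T], y[:, :T]             # hold out the last period

# Pooled OLS slope and intercept.
b_ols = np.polyfit(X_train.ravel(), y_train.ravel(), 1)
pred_ols = np.polyval(b_ols, x[:, T])

# Within (fixed-effects) slope plus estimated individual intercepts.
xd = X_train - X_train.mean(axis=1, keepdims=True)
yd = y_train - y_train.mean(axis=1, keepdims=True)
b_fe = (xd * yd).sum() / (xd ** 2).sum()
alpha_i = y_train.mean(axis=1) - b_fe * X_train.mean(axis=1)
pred_fe = alpha_i + b_fe * x[:, T]

print("pooled OLS forecast MSE:", np.mean((y[:, T] - pred_ols) ** 2))
print("fixed-effects forecast MSE:", np.mean((y[:, T] - pred_fe) ** 2))
```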

16.
17.
It is proved that the formula for least squares extrapolation in a stationary non-linear AR(1) process is also valid for non-stationary non-linear AR(1) processes. This formula depends on the distribution of the corresponding white noise. If the non-linear function used in the model is non-decreasing and concave, upper and lower bounds are derived for the least squares extrapolation such that the bounds depend only on the expectation of the white noise. It is shown in an example that the derived bounds in some cases give a good approximation to the least squares extrapolation. Copyright © 2001 John Wiley & Sons, Ltd.
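To illustrate the flavor of such bounds (a sketch under the stated concavity assumption, not the paper's exact statement): in a nonlinear AR(1) with transition function g and noise mean μ, the one-step extrapolation needs only μ, and Jensen's inequality already gives a two-step upper bound that depends on the noise only through μ.

```latex
% Nonlinear AR(1) model with i.i.d. noise e_t, E[e_t] = \mu:
\[
  X_{t} = g(X_{t-1}) + e_{t}, \qquad \mathbb{E}[e_t] = \mu .
\]
% One-step least squares extrapolation depends only on \mu:
\[
  \mathbb{E}[X_{t+1}\mid X_t] = g(X_t) + \mu .
\]
% Two steps ahead the full noise distribution enters, but if g is concave,
% Jensen's inequality yields an upper bound that depends only on \mu:
\[
  \mathbb{E}[X_{t+2}\mid X_t]
    = \mathbb{E}\!\left[g\bigl(g(X_t) + e_{t+1}\bigr)\right] + \mu
    \;\le\; g\bigl(g(X_t) + \mu\bigr) + \mu .
\]
```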

18.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high-dimensional market data. This ability allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via FPLS regression from both the crude oil returns and auxiliary variables, namely the exchange rates of major currencies. For forecast performance evaluation, we compare the out-of-sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform our competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.
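A rough sketch of the FPLS idea (illustrative only, not the authors' code): treat each month's daily path of an auxiliary series as a curve, summarize it by a few basis coefficients (a simple Legendre basis here, whereas true FPLS works with smoothed functional objects), and feed those coefficients into a partial least squares regression for next-month oil returns. All data are simulated placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(11)
n_months, n_days, n_basis = 120, 22, 4
curves = np.cumsum(rng.standard_normal((n_months, n_days)), axis=1)  # daily FX paths
oil_ret = rng.standard_normal(n_months)                              # monthly target

# Project each monthly curve onto a small polynomial basis.
grid = np.linspace(-1, 1, n_days)
basis = np.polynomial.legendre.legvander(grid, n_basis - 1)           # (n_days, n_basis)
coefs = curves @ np.linalg.pinv(basis).T                              # per-curve coefficients

# PLS from this-month curve coefficients to next-month oil returns.
pls = PLSRegression(n_components=2).fit(coefs[:-1], oil_ret[1:])
print("one-step-ahead forecast:", pls.predict(coefs[-1:]).ravel()[0])
```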

19.
This paper investigates how and when pairs of terms such as "local–global" and "im Kleinen–im Grossen" began to be used by mathematicians as explicit reflexive categories. A first phase of automatic search led to the delineation of the relevant corpus, and to the identification of the period from 1898 to 1918 as that of emergence. The emergence appears to have been, from the very start, both transdisciplinary (function theory, calculus of variations, differential geometry) and international, although the AMS-Göttingen connection played a specific part. First used as an expository and didactic tool (e.g. by Osgood), the distinction soon played a crucial part in the creation of new mathematical concepts (e.g. in Hahn's work), in the shaping of research agendas (e.g. Blaschke's global differential geometry), and in Weyl's axiomatic foundation of the manifold concept. We finally turn to France, where in the 1910s, in the wake of Poincaré's work, Hadamard began to promote a research agenda in terms of "passage du local au général."

20.
The intention of this paper is to empirically forecast the daily betas of a few European banks by means of four generalized autoregressive conditional heteroscedasticity (GARCH) models and the Kalman filter method during the pre-global financial crisis period and the crisis period. The four GARCH models employed are BEKK GARCH, DCC GARCH, DCC-MIDAS GARCH and Gaussian-copula GARCH. The data consist of daily stock prices from 2001 to 2013 for two large banks each from Austria, Belgium, Greece, Holland, Ireland, Italy, Portugal and Spain. We apply the rolling forecasting method and the model confidence set (MCS) to compare the daily forecasting ability of the five models during one month of the pre-crisis (January 2007) and the crisis (January 2013) periods. Based on the MCS results, the BEKK proves the best model in the January 2007 period, and the Kalman filter clearly outperforms the other models during the January 2013 period. The results have implications regarding the choice of model during different periods by practitioners and academics. Copyright © 2016 John Wiley & Sons, Ltd.
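A compact sketch of the Kalman filter approach to daily time-varying betas, one of the five methods compared above: the market model with a random-walk beta, filtered recursively. The state and observation noise variances are fixed at illustrative values here rather than estimated by maximum likelihood, and the return series are simulated.

```python
import numpy as np

rng = np.random.default_rng(12)
T = 750
market = 0.01 * rng.standard_normal(T)                    # daily market returns (toy)
true_beta = 1.0 + 0.3 * np.sin(np.linspace(0, 6, T))      # slowly moving true beta
bank = true_beta * market + 0.005 * rng.standard_normal(T)

beta, P = 1.0, 1.0            # state estimate (beta) and its variance
q, r = 1e-4, 0.005 ** 2       # state and observation noise variances (assumed)
filtered = np.empty(T)
for t in range(T):
    P += q                                     # predict: beta_t = beta_{t-1} + w_t
    H = market[t]                              # observation: bank_t = beta_t * market_t + v_t
    K = P * H / (H * P * H + r)                # Kalman gain
    beta += K * (bank[t] - H * beta)           # update with today's return pair
    P *= (1 - K * H)
    filtered[t] = beta

print("last filtered beta:", filtered[-1], "true value:", true_beta[-1])
```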
