Forecast Evaluation with Shared Data Sets
24 Pages. Posted: 2 Dec 2001
Date Written: November 2001
Abstract
Data sharing is common practice in forecasting experiments when fresh data samples are difficult or expensive to generate. As a result, forecasters often analyze the same data set using a host of different models and sets of explanatory variables. This practice introduces statistical dependencies across forecasting studies that can severely distort statistical inference. Here we examine a new and inexpensive recursive bootstrap procedure that allows forecasters to account explicitly for these dependencies. The procedure allows forecasters to merge empirical evidence and draw inference in light of previously accumulated results. In an empirical example, we merge results from predictions of daily stock prices based on (1) technical trading rules and (2) calendar rules, demonstrating both the significance of the problems arising from data sharing and the simplicity of accounting for it using these new methods.
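The core issue the abstract describes, namely that evaluating many forecasting rules on one shared data set inflates the chance of a spuriously "significant" rule, can be illustrated with a bootstrap test in the spirit of White's Reality Check. The sketch below is illustrative only and is not the paper's procedure: the function name, the stationary-bootstrap resampling scheme, and all parameters are assumptions for the example. The key idea it demonstrates is resampling time indices once per bootstrap draw and applying them to all rules jointly, so the cross-rule dependence induced by sharing the data set is preserved.

```python
import numpy as np

def max_rule_bootstrap_pvalue(loss_diffs, n_boot=500, block_prob=0.1, seed=0):
    """Bootstrap p-value for the null that no rule beats the benchmark.

    loss_diffs : (T, K) array of benchmark loss minus rule-k loss at time t,
                 so positive means rule k outperformed the benchmark.
    The test statistic is the best (maximum) standardized mean over the K
    rules; the bootstrap resamples TIME indices with a stationary-bootstrap
    scheme and reuses the same indices for every rule, which is what keeps
    the cross-rule dependence from the shared data set intact.
    """
    rng = np.random.default_rng(seed)
    T, K = loss_diffs.shape
    means = loss_diffs.mean(axis=0)
    stat = np.sqrt(T) * means.max()          # observed best-rule statistic

    boot_stats = np.empty(n_boot)
    for b in range(n_boot):
        # Stationary bootstrap: geometric block lengths, wrapping at T.
        idx = np.empty(T, dtype=int)
        idx[0] = rng.integers(T)
        for t in range(1, T):
            if rng.random() < block_prob:    # start a new block
                idx[t] = rng.integers(T)
            else:                            # continue the current block
                idx[t] = (idx[t - 1] + 1) % T
        sample = loss_diffs[idx]             # same indices for all K rules
        # Recenter at the sample means so the bootstrap imposes the null.
        boot_stats[b] = np.sqrt(T) * (sample.mean(axis=0) - means).max()

    return float((boot_stats >= stat).mean())

# Usage: with pure-noise loss differentials the best of K rules still looks
# good in-sample, but the joint bootstrap p-value correctly stays large;
# a rule with a genuine edge drives the p-value toward zero.
rng = np.random.default_rng(1)
null_diffs = rng.normal(0.0, 1.0, size=(200, 10))
p_null = max_rule_bootstrap_pvalue(null_diffs)

real_diffs = null_diffs.copy()
real_diffs[:, 0] += 0.5                      # rule 0 genuinely outperforms
p_real = max_rule_bootstrap_pvalue(real_diffs)
```

Testing a single rule in isolation against the same data would ignore the search over K rules; feeding all rules' loss differentials into one joint bootstrap is what corrects the inference for data sharing.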
Keywords: Forecast evaluation, bootstrap, data sharing, calendar effects, technical trading
JEL Classification: C10