
Forecast Evaluation of Small Nested Model Sets

Authors

  • Kirstin Hubrich
  • Kenneth D. West

Abstract
We propose two new procedures for comparing the mean squared prediction error (MSPE) of a benchmark model to the MSPEs of a small set of alternative models that nest the benchmark. Our procedures compare the benchmark to all the alternative models simultaneously rather than sequentially, and do not require reestimation of models as part of a bootstrap procedure. Both procedures adjust MSPE differences in accordance with Clark and West (2007); one procedure then examines the maximum t-statistic, while the other computes a chi-squared statistic. Our simulations examine the proposed procedures and two existing procedures that do not adjust the MSPE differences: a chi-squared statistic and White's (2000) reality check. In these simulations, the two statistics that adjust MSPE differences have the most accurate size, and the procedure based on the maximum t-statistic has the best power. We illustrate our procedures by comparing forecasts of different models for U.S. inflation.
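
For readers who want to see the mechanics, the sketch below illustrates, under stated assumptions, the Clark and West (2007) MSPE adjustment applied to a benchmark nested in each of m alternative models, together with a maximum t-statistic and a chi-squared statistic computed from the adjusted loss differentials. This is not the authors' code: the plain sample covariance (suitable only for one-step-ahead, serially uncorrelated differentials), the simulated-normal critical values for the max-t, and all function and variable names are illustrative assumptions; multi-step forecasts would require a HAC covariance estimator, and the paper's exact critical-value construction may differ.

```python
# Illustrative sketch (not the authors' implementation) of Clark-West (2007)
# adjusted MSPE comparisons of a benchmark against m nesting alternatives,
# with a max-t statistic and a chi-squared statistic. Assumes one-step-ahead
# forecasts so that a plain sample covariance is a reasonable variance estimate.
import numpy as np
from scipy import stats

def cw_adjusted_diffs(y, f_bench, f_alts):
    """Clark-West adjusted MSPE differentials, one column per alternative model.

    y       : (P,)   realized values over the P-period evaluation sample
    f_bench : (P,)   forecasts from the benchmark (nested) model
    f_alts  : (P, m) forecasts from the m larger models that nest the benchmark
    """
    e0 = (y - f_bench) ** 2                     # benchmark squared errors
    e1 = (y[:, None] - f_alts) ** 2             # alternatives' squared errors
    adj = (f_bench[:, None] - f_alts) ** 2      # CW adjustment for estimation noise
    return e0[:, None] - (e1 - adj)             # (P, m) adjusted differentials

def nested_set_tests(y, f_bench, f_alts, n_sim=50_000, seed=0):
    """Max-t and chi-squared tests on the vector of adjusted mean differentials."""
    d = cw_adjusted_diffs(y, f_bench, f_alts)
    P, m = d.shape
    dbar = d.mean(axis=0)
    V = np.atleast_2d(np.cov(d, rowvar=False)) / P   # covariance of the means
    se = np.sqrt(np.diag(V))
    max_t = (dbar / se).max()

    # Null distribution of the max-t approximated by draws from a multivariate
    # normal with the estimated correlation matrix (an illustrative shortcut).
    corr = V / np.outer(se, se)
    rng = np.random.default_rng(seed)
    sim_max = rng.multivariate_normal(np.zeros(m), corr, size=n_sim).max(axis=1)
    p_max_t = (sim_max >= max_t).mean()

    # Chi-squared statistic on the full vector of adjusted mean differentials.
    chi2 = dbar @ np.linalg.solve(V, dbar)
    p_chi2 = stats.chi2.sf(chi2, df=m)
    return {"max_t": max_t, "p_max_t": p_max_t, "chi2": chi2, "p_chi2": p_chi2}
```

In a call such as nested_set_tests(y, fcst_benchmark, np.column_stack([fcst_alt1, fcst_alt2])) (hypothetical names), rejection suggests that at least one of the larger models improves on the benchmark; the max-t version also indicates which alternative drives the rejection.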

Suggested Citation

  • Kirstin Hubrich & Kenneth D. West, 2008. "Forecast Evaluation of Small Nested Model Sets," NBER Working Papers 14601, National Bureau of Economic Research, Inc.
  • Handle: RePEc:nbr:nberwo:14601
    Note: AP ME TWP

    Download full text from publisher

    File URL: http://www.nber.org/papers/w14601.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Orphanides, Athanasios & van Norden, Simon, 2005. "The Reliability of Inflation Forecasts Based on Output Gap Estimates in Real Time," Journal of Money, Credit and Banking, Blackwell Publishing, vol. 37(3), pages 583-601, June.
    2. Kirstin Hubrich & David F. Hendry, 2005. "Forecasting Aggregates by Disaggregates," Computing in Economics and Finance 2005 270, Society for Computational Economics.
    3. West, Kenneth D, 1996. "Asymptotic Inference about Predictive Ability," Econometrica, Econometric Society, vol. 64(5), pages 1067-1084, September.
    4. Todd Clark & Michael McCracken, 2005. "Evaluating Direct Multistep Forecasts," Econometric Reviews, Taylor & Francis Journals, vol. 24(4), pages 369-404.
    5. West, Kenneth D. & Cho, Dongchul, 1995. "The predictive ability of several models of exchange rate volatility," Journal of Econometrics, Elsevier, vol. 69(2), pages 367-391, October.
    6. Sarno, Lucio & Thornton, Daniel L & Valente, Giorgio, 2005. "Federal Funds Rate Prediction," Journal of Money, Credit and Banking, Blackwell Publishing, vol. 37(3), pages 449-471, June.
    7. Clark, Todd E. & West, Kenneth D., 2007. "Approximately normal tests for equal predictive accuracy in nested models," Journal of Econometrics, Elsevier, vol. 138(1), pages 291-311, May.
    8. Atsushi Inoue & Lutz Kilian, 2005. "In-Sample or Out-of-Sample Tests of Predictability: Which One Should We Use?," Econometric Reviews, Taylor & Francis Journals, vol. 23(4), pages 371-402.
    9. Hendry, David F. & Hubrich, Kirstin, 2011. "Combining Disaggregate Forecasts or Combining Disaggregate Information to Forecast an Aggregate," Journal of Business & Economic Statistics, American Statistical Association, vol. 29(2), pages 216-227.
    10. Inoue, Atsushi & Kilian, Lutz, 2006. "On the selection of forecasting models," Journal of Econometrics, Elsevier, vol. 130(2), pages 273-306, February.
    11. West, Kenneth D, 2001. "Tests for Forecast Encompassing When Forecasts Depend on Estimated Regression Parameters," Journal of Business & Economic Statistics, American Statistical Association, vol. 19(1), pages 29-33, January.
    12. Clark, Todd E. & West, Kenneth D., 2006. "Using out-of-sample mean squared prediction errors to test the martingale difference hypothesis," Journal of Econometrics, Elsevier, vol. 135(1-2), pages 155-186.
    13. Ashley, R & Granger, C W J & Schmalensee, R, 1980. "Advertising and Aggregate Consumption: An Analysis of Causality," Econometrica, Econometric Society, vol. 48(5), pages 1149-1167, July.
    14. Rapach, David E. & Wohar, Mark E., 2006. "In-sample vs. out-of-sample tests of stock return predictability in the context of data mining," Journal of Empirical Finance, Elsevier, vol. 13(2), pages 231-247, March.
    15. Todd E. Clark & Michael W. McCracken, 2010. "Reality checks and nested forecast model comparisons," Working Papers 2010-032, Federal Reserve Bank of St. Louis.
    16. Yongmiao Hong & Tae-Hwy Lee, 2003. "Inference on Predictability of Foreign Exchange Rates via Generalized Spectrum and Nonlinear Time Series Models," The Review of Economics and Statistics, MIT Press, vol. 85(4), pages 1048-1062, November.
    17. James H. Stock & Mark W. Watson, 2007. "Why Has U.S. Inflation Become Harder to Forecast?," Journal of Money, Credit and Banking, Blackwell Publishing, vol. 39(s1), pages 3-33, February.
    18. Hubrich, Kirstin, 2005. "Forecasting euro area inflation: Does aggregating forecasts by HICP component improve forecast accuracy?," International Journal of Forecasting, Elsevier, vol. 21(1), pages 119-136.
    19. McCracken, Michael W., 2007. "Asymptotics for out of sample tests of Granger causality," Journal of Econometrics, Elsevier, vol. 140(2), pages 719-752, October.
    20. West, Kenneth D. & Edison, Hali J. & Cho, Dongchul, 1993. "A utility-based comparison of some models of exchange rate volatility," Journal of International Economics, Elsevier, vol. 35(1-2), pages 23-45, August.
    21. Halbert White, 2000. "A Reality Check for Data Snooping," Econometrica, Econometric Society, vol. 68(5), pages 1097-1126, September.
    22. Lütkepohl, Helmut, 1984. "Forecasting Contemporaneously Aggregated Vector ARMA Processes," Journal of Business & Economic Statistics, American Statistical Association, vol. 2(3), pages 201-214, July.
    23. Andreas Billmeier, 2004. "Ghostbusting: Which Output Gap Measure Really Matters?," IMF Working Papers 2004/146, International Monetary Fund.
    24. Raffaella Giacomini & Halbert White, 2006. "Tests of Conditional Predictive Ability," Econometrica, Econometric Society, vol. 74(6), pages 1545-1578, November.
    25. Clark, Todd E. & McCracken, Michael W., 2001. "Tests of equal forecast accuracy and encompassing for nested models," Journal of Econometrics, Elsevier, vol. 105(1), pages 85-110, November.
    26. Hansen, Peter Reinhard, 2005. "A Test for Superior Predictive Ability," Journal of Business & Economic Statistics, American Statistical Association, vol. 23, pages 365-380, October.
    27. Diebold, Francis X & Mariano, Roberto S, 2002. "Comparing Predictive Accuracy," Journal of Business & Economic Statistics, American Statistical Association, vol. 20(1), pages 134-144, January.
    28. David Harvey & Paul Newbold, 2000. "Tests for multiple forecast encompassing," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 15(5), pages 471-482.
    29. Andrew Atkeson & Lee E. Ohanian, 2001. "Are Phillips curves useful for forecasting inflation?," Quarterly Review, Federal Reserve Bank of Minneapolis, vol. 25(Win), pages 2-11.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Todd E. Clark & Michael W. McCracken, 2010. "Reality checks and nested forecast model comparisons," Working Papers 2010-032, Federal Reserve Bank of St. Louis.
    2. West, Kenneth D., 2006. "Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 1, chapter 3, pages 99-134, Elsevier.
    3. Granziera, Eleonora & Hubrich, Kirstin & Moon, Hyungsik Roger, 2014. "A predictability test for a small number of nested models," Journal of Econometrics, Elsevier, vol. 182(1), pages 174-185.
    4. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    5. Raffaella Giacomini & Barbara Rossi, 2013. "Forecasting in macroeconomics," Chapters, in: Nigar Hashimzade & Michael A. Thornton (ed.), Handbook of Research Methods and Applications in Empirical Macroeconomics, chapter 17, pages 381-408, Edward Elgar Publishing.
    6. Clark, Todd E. & West, Kenneth D., 2007. "Approximately normal tests for equal predictive accuracy in nested models," Journal of Econometrics, Elsevier, vol. 138(1), pages 291-311, May.
    7. Rossi, Barbara, 2013. "Advances in Forecasting under Instability," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1203-1324, Elsevier.
    8. Busetti, Fabio & Marcucci, Juri, 2013. "Comparing forecast accuracy: A Monte Carlo investigation," International Journal of Forecasting, Elsevier, vol. 29(1), pages 13-27.
    9. Calhoun, Gray, 2014. "Out-Of-Sample Comparisons of Overfit Models," Staff General Research Papers Archive 32462, Iowa State University, Department of Economics.
    10. Brooks, Chris & Burke, Simon P. & Stanescu, Silvia, 2016. "Finite sample weighting of recursive forecast errors," International Journal of Forecasting, Elsevier, vol. 32(2), pages 458-474.
    11. Mariano, Roberto S. & Preve, Daniel, 2012. "Statistical tests for multiple forecast comparison," Journal of Econometrics, Elsevier, vol. 169(1), pages 123-130.
    12. Todd E. Clark & Kenneth D. West, 2005. "Using Out-of-Sample Mean Squared Prediction Errors to Test the Martingale Difference," NBER Technical Working Papers 0305, National Bureau of Economic Research, Inc.
    13. Molodtsova, Tanya & Papell, David H., 2009. "Out-of-sample exchange rate predictability with Taylor rule fundamentals," Journal of International Economics, Elsevier, vol. 77(2), pages 167-180, April.
    14. Pablo Pincheira Brown, 2022. "A Power Booster Factor for Out-of-Sample Tests of Predictability," Revista Economía, Fondo Editorial - Pontificia Universidad Católica del Perú, vol. 45(89), pages 150-183.
    15. Clark, Todd E. & McCracken, Michael W., 2015. "Nested forecast model comparisons: A new approach to testing equal accuracy," Journal of Econometrics, Elsevier, vol. 186(1), pages 160-177.
    16. Todd E. Clark & Michael W. McCracken, 2010. "Testing for unconditional predictive ability," Working Papers 2010-031, Federal Reserve Bank of St. Louis.
    17. Peter Reinhard Hansen & Allan Timmermann, 2015. "Comment," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 33(1), pages 17-21, January.
    18. Ahmed, Shamim & Liu, Xiaoquan & Valente, Giorgio, 2016. "Can currency-based risk factors help forecast exchange rates?," International Journal of Forecasting, Elsevier, vol. 32(1), pages 75-97.
    19. Richard A. Ashley & Kwok Ping Tsang, 2014. "Credible Granger-Causality Inference with Modest Sample Lengths: A Cross-Sample Validation Approach," Econometrics, MDPI, vol. 2(1), pages 1-20, March.
    20. Barbara Rossi & Atsushi Inoue, 2012. "Out-of-Sample Forecast Tests Robust to the Choice of Window Size," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 30(3), pages 432-453, April.

    More about this item

    JEL classification:

    • C32 - Mathematical and Quantitative Methods - - Multiple or Simultaneous Equation Models; Multiple Variables - - - Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models; Diffusion Processes; State Space Models
    • C53 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Forecasting and Prediction Models; Simulation Methods
    • E37 - Macroeconomics and Monetary Economics - - Prices, Business Fluctuations, and Cycles - - - Forecasting and Simulation: Models and Applications

