Occasionally I will use this blog to raise issues about economic forecasting that I think are important but not yet fully resolved. The one I want to address today concerns our failure to predict cyclical peaks. Most forecast evaluations have shown that recessions are not predicted in advance and sometimes are not recognized for a considerable period of time. Unfortunately, most evaluations merely describe the characteristics of the predictive errors and thus fail to determine why they occurred. I know of at least three possible explanations for this predictive failure: bad GDP data, asymmetric loss functions, and too much emphasis on quantitative models at the expense of indicators.
Problems associated with the quality of the GDP data might provide a possible explanation for these types of errors. Some studies have shown that the GDP data are of lower quality during recessions, and the process by which these data are generated might have contributed to our failures to predict recessions in advance. Culbertson and Sinclair (2014) provide an explanation. The advance estimates released a month after the quarter to which they refer are not entirely based on actual observations; rather, a large portion comes from trend-based data or a combination of actual observations and these trend-based data. Thus models are used to generate data which are then inserted into forecasting models, "... so there is a feedback loop of poor performance, particularly at economic turning points" (Culbertson and Sinclair, 2014).
But even with these data problems, based on what I observed in the leading and coincident indicators, I predicted the Great Recession in October 2007; I even indicated that it would be worse than the other postwar recessions because consumers would no longer be able to use their housing equity as a piggy bank to finance their purchases. To be honest, I did not foresee the depth of the decline or the financial turmoil. I should note that I have predicted every recession that has occurred since 1957 (but unfortunately also some that did not materialize). I ask: How can people looking at the same real-time data miss these turning points? First, they could have prior beliefs that there was absolutely no chance a recession could occur. More likely, they take into account the reliability of the information and compare the cost of predicting a recession that does not occur with the cost of failing to predict one. If the cost of a false alarm exceeds the cost of a miss, the recession won't be predicted. To corroborate this hypothesis, we would need to know the relative costs of forecasters' errors, but I am not aware of such information. Thus we need some case studies showing how particular individuals went about making forecasts.
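The asymmetric-loss argument above can be sketched as a simple expected-cost comparison. This is an illustrative sketch, not any forecaster's actual rule; the probabilities and cost ratios are hypothetical numbers chosen to show how a large false-alarm penalty suppresses a recession call:

```python
def announce_recession(p_recession: float,
                       cost_false_alarm: float,
                       cost_missed_recession: float) -> bool:
    """Announce a recession only when the expected cost of staying
    silent (missing a real recession) exceeds the expected cost of
    announcing one that never materializes (a false alarm)."""
    expected_cost_of_silence = p_recession * cost_missed_recession
    expected_cost_of_announcing = (1 - p_recession) * cost_false_alarm
    return expected_cost_of_silence > expected_cost_of_announcing

# Hypothetical case: the reputational cost of a false alarm is three
# times the cost of a miss. Even at a 60% recession probability, the
# forecaster stays silent:
print(announce_recession(0.60, cost_false_alarm=3.0, cost_missed_recession=1.0))  # False

# Reverse the cost asymmetry and the same 60% probability triggers a call:
print(announce_recession(0.60, cost_false_alarm=1.0, cost_missed_recession=3.0))  # True
```

The point of the sketch is that the same real-time probability can produce opposite forecasts depending solely on the loss function, which is why knowing forecasters' relative error costs matters.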
Finally, perhaps we need to ask whether we are paying enough attention to data explicitly designed to provide information about the current state of the economy or intended to forecast turning points. Qualitative information such as that contained in the Fed's Beige Book and Google Trends can provide valuable insights about what is happening in real time. Moreover, we know that most of the indicators classified as "leading" do in fact decline before the economy does. Rate-of-change methods such as first differences and diffusion indexes have similar characteristics. All of these methods do, however, predict a number of declines that never occur. Their use therefore changes the nature of the forecasting problem: when we observe a "signal" from these indicators, we must explicitly confront the possibility that a recession will occur, and we need only decide whether the signal is true or false. Thus I suggest that the profession undertake research to determine appropriate rules for identifying true "signals" of a recession.
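To make the diffusion-index idea concrete, here is a minimal sketch with made-up data: the index is the share of a set of leading indicators that rose over the period, and a reading below a chosen threshold (0.5 here, an assumption for illustration) is treated as a candidate recession signal that still must be judged true or false:

```python
def diffusion_index(changes):
    """Fraction of indicators whose latest change is positive."""
    rising = sum(1 for c in changes if c > 0)
    return rising / len(changes)

def candidate_signal(changes, threshold=0.5):
    """Flag a possible downturn when fewer than `threshold` of the
    indicators are rising. The threshold is a hypothetical choice;
    picking it well is exactly the research question raised above."""
    return diffusion_index(changes) < threshold

# Hypothetical month-over-month changes in ten leading indicators:
expansion = [0.4, 0.1, 0.3, -0.2, 0.5, 0.2, 0.1, 0.3, -0.1, 0.2]    # 8 of 10 rising
downturn  = [-0.4, -0.1, 0.3, -0.2, -0.5, 0.2, -0.1, -0.3, -0.1, 0.2]  # 3 of 10 rising

print(diffusion_index(expansion))   # 0.8 -- no signal
print(candidate_signal(downturn))   # True -- candidate signal, true or false?
```

A rule like this will inevitably flag some declines that never occur, which is why the follow-on question, distinguishing true signals from false ones, is where the research effort belongs.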