Paying attention to inattention

Olivier Coibion, Yuriy Gorodnichenko, 24 August 2012

“It should be clear that among the causes of the recent financial crisis was an unjustified faith in rational expectations …”
Paul Volcker, 27 October 2011

Macroeconomics has taken a public flogging since the onset of the financial crisis, both from those outside the profession (such as the Oscar-winning documentary Inside Job) and from some insiders (such as Krugman’s Dark Age of Macroeconomics). One prevalent criticism targets the assumption of full-information rational expectations (FIRE), under which economic agents know the structure and parameters of the economic model, observe all shocks and variables in real time, and form identical expectations. And while the FIRE assumption is indeed common in macroeconomic models, macroeconomists have long been exploring departures from full information. Lucas (1972) and Kydland and Prescott (1982) are early examples of models in which agents face imperfect information, and both Tom Sargent and Chris Sims – the two most recent Nobel recipients in economics – have done groundbreaking work along these lines.

Attention to models of inattention has been particularly high over the last decade since the pioneering work of Mankiw and Reis (2002), Sims (2003) and Woodford (2001). Each proposes models that embed information rigidities – frictions which lead rational agents to have only imperfect information about economic conditions – thereby departing from the full-information component of FIRE. The recent survey of this literature by Mankiw and Reis (2010) documents that models of imperfect information can address a number of puzzles in macroeconomics, international economics and finance.

At the same time, empirical evidence on the nature of the expectations formation process has been limited. While there is a long literature testing the FIRE assumption, it has proven difficult to quantify the economic significance (for example, does a departure make a difference for the macroeconomy?) and to interpret the nature of the rejections (for example, is it irrationality or imperfect information?). In recent work (Coibion and Gorodnichenko 2011, 2012), we propose and apply new empirical tests – derived directly from theoretical models of imperfect information – that shed new light on the expectations formation process of economic agents by addressing both limitations of traditional tests.

Predictions of models of information rigidities

We focus on two particular classes of models of information rigidities. The first, following Mankiw and Reis (2002), comprises models in which agents face fixed costs of acquiring and processing information and therefore update their information sets infrequently ('sticky-information' models). The second, following Woodford (2001) and Sims (2003), comprises models in which agents continuously update their information sets but receive only noisy signals about underlying economic conditions ('noisy-information' models). These agents solve signal-extraction problems and, in light of their heterogeneous signals, hold different beliefs about economic conditions.
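For concreteness, the noisy-information updating rule can be sketched as a simple Kalman-style weighted average of an agent's prior and a private noisy signal. All parameter values below (persistence, gain, noise variances) are purely illustrative, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 300, 200          # periods and agents
rho, gain = 0.9, 0.35    # AR(1) persistence and (assumed constant) Kalman gain

# true fundamental (e.g. inflation), an AR(1) process
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + rng.normal()

belief = np.zeros(N)     # each agent's filtered estimate of the current state
avg_fc = np.zeros(T)     # average one-step-ahead forecast across agents
for t in range(1, T):
    signal = x[t] + rng.normal(size=N)                  # heterogeneous noisy signals
    belief = (1 - gain) * rho * belief + gain * signal  # partial updating
    avg_fc[t] = rho * belief.mean()

# under-reaction: the average forecast is smoother than the full-information one
print(avg_fc.std() < (rho * x).std())   # True
```

Because the gain is below one, each agent only partly incorporates news, so the average forecast adjusts gradually to shocks – the under-reaction relative to the FIRE benchmark discussed below.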

While the source of heterogeneity in beliefs across agents differs in the two models, we show that they make several common predictions. First, both models imply that the average forecast across agents will under-react to economic shocks relative to the FIRE benchmark. This conditional prediction obtains in both the sticky-information model (because some agents will not adjust their forecasts) and in the noisy-information model (because agents will partially adjust their forecasts to new signals since these signals are generally noisy). Second, both models imply that average forecast errors across agents will be predictable using the average ex-ante forecast revision across agents. This unconditional prediction is an emergent property of both models: the forecast errors made by individuals are unpredictable on average given their forecast revisions. The predictability of the average forecast error instead stems from the aggregation across agents. In contrast, FIRE requires average ex-post forecast errors to be both conditionally and unconditionally unpredictable given the information available to the agents when the forecasts were made. Thus, these predictions allow us to test the null of FIRE against the alternative predictions made by models of information rigidities.
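To see how the second prediction emerges purely from aggregation, consider a minimal sticky-information simulation (all parameter values are illustrative). With an updating probability λ each period, theory implies the average forecast error loads on the average ex-ante forecast revision with slope (1 − λ)/λ, even though each individual's error is unpredictable given his own revision:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 2000, 400         # periods and agents
rho, lam = 0.9, 0.25     # AR(1) persistence; prob. an agent updates each period

x = np.zeros(T)          # true variable, AR(1)
for t in range(1, T):
    x[t] = rho * x[t - 1] + rng.normal()

last = np.zeros(N, dtype=int)   # date of each agent's most recent update
err, rev = [], []
for t in range(1, T - 1):
    old_fc = rho ** (t + 1 - last) * x[last]   # date t-1 forecasts of x_{t+1}
    upd = rng.random(N) < lam                  # a fraction lam updates at t
    last = np.where(upd, t, last)
    new_fc = rho ** (t + 1 - last) * x[last]   # date-t forecasts of x_{t+1}
    err.append(x[t + 1] - new_fc.mean())       # average ex-post forecast error
    rev.append(new_fc.mean() - old_fc.mean())  # average ex-ante forecast revision

beta = np.polyfit(rev, err, 1)[0]
# the estimated slope should be close to the theoretical (1 - lam) / lam = 3
print(round(beta, 1))
```

Under FIRE the slope would be zero; here it is positive because agents who have not yet updated drag the average forecast behind the full-information benchmark.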

The conditional predictability of average forecast errors

In Coibion and Gorodnichenko (2012), we assess the speed at which forecasts respond to shocks by estimating the response of the average inflation forecast error across agents to economic shocks using survey data for professional forecasters, consumers, firms and FOMC members. Figure 1 below shows these responses after a disinflationary technology shock. Under FIRE, the response of forecast errors should be zero whereas models of information rigidities predict negative forecast errors (since the average forecast should decline by less than ex-post inflation). The latter is exactly what we find: we can strongly reject the null of FIRE for all agents and do so precisely in the direction suggested by models of information rigidities.

Figure 1. The response of average forecast errors to a disinflationary technology shock

Note: The figure plots the impulse responses of year-ahead inflation forecast errors to disinflationary technology shocks. The top two panels are quarterly, while the bottom two panels are semiannual. See Coibion and Gorodnichenko (2012) for details.

The predictability of average forecast errors using forecast revisions

In Coibion and Gorodnichenko (2011), we assess whether average forecast errors across agents are predictable using average ex-ante forecast revisions. Across a variety of economic agents, variables and countries, we find systematic predictability in average ex-post forecast errors using ex-ante forecast revisions. Additional testable restrictions implied by both models of information rigidities (such as zero constants or coefficients on contemporaneous and lagged forecasts summing to zero) cannot be rejected. Because imperfect information models point to a direct theoretical mapping between the coefficients on forecast revisions and the underlying degree of information rigidity, we can assess the economic significance of the rejections of FIRE. For example, in noisy-information models, the estimates imply that agents place more weight on their prior beliefs than on new information, pointing to levels of information rigidity which should have pronounced macroeconomic implications. Thus, unlike traditional tests of FIRE, our approach provides information not just about FIRE but also about the nature and economic significance of rejections of full-information rational expectations.
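In the simplest versions of both models the mapping is one line of algebra: if β is the estimated slope on the ex-ante forecast revision, sticky information implies an updating probability λ = 1/(1 + β), and univariate noisy information implies a Kalman gain G = 1/(1 + β). A sketch with a purely illustrative value of β (not the paper's exact estimate):

```python
beta_hat = 1.2   # illustrative slope on forecast revisions, not the paper's estimate

# sticky information: beta = (1 - lam) / lam, so lam = 1 / (1 + beta)
lam = 1 / (1 + beta_hat)

# noisy information (univariate case): beta = (1 - G) / G, so G = 1 / (1 + beta)
gain = 1 / (1 + beta_hat)

# any beta > 1 puts the weight on new information below one half,
# i.e. agents place more weight on prior beliefs than on news
print(round(lam, 3), round(gain, 3))   # prints 0.455 0.455
```

This is why a positive estimated β is informative about economic magnitudes, not just a statistical rejection of FIRE: larger β means slower updating (smaller λ) or noisier signals (smaller G).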

Historical variation in information rigidity

In Coibion and Gorodnichenko (2011), we also show that our empirical specification can be used to characterise how the degree of information rigidity varies in response to economic conditions. For example, in both sticky and noisy-information models, high levels of economic volatility should induce agents to devote more resources and effort to the collection and processing of information. We consider two sources of variation in macroeconomic volatility to assess the extent to which the degree of information rigidity responded endogenously to this variation.

First, there was a dramatic decline in macroeconomic volatility in the early to mid-1980s, commonly referred to as the Great Moderation, relative to the high levels of the 1970s. This lower aggregate volatility should have induced agents to devote less attention to macroeconomic conditions. Figure 2 below plots the low-frequency variation in our estimated levels of information rigidity relative to the rolling standard deviation of real GDP growth, a simple measure of aggregate volatility. Consistent with models of information rigidities, the degree of information rigidity fell during the 1970s when volatility was high, reached a minimum around 1983-4 at the same time as the onset of the Great Moderation, then climbed gradually over the course of the Great Moderation.

Figure 2. The degree of information rigidity and the Great Moderation

Notes: The figure plots the time series of two variables. The first is the standard deviation of the US real GDP growth rate (annualised) over a five-year moving window (red dashed line; right axis). The second is the smoothed coefficient βt on forecast revisions in Coibion and Gorodnichenko (2011), estimated for each quarter separately on the Survey of Professional Forecasters data (black thick solid line; left axis). The shaded region is the 95% confidence interval. The smoother is a local average that uses an Epanechnikov kernel with bandwidth equal to five.
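The two statistical ingredients in the note – a trailing moving-window standard deviation and an Epanechnikov local average – are straightforward to reproduce. The sketch below applies them to hypothetical quarterly growth data (not the actual series underlying the figure):

```python
import numpy as np

def rolling_std(x, window):
    """Sample standard deviation over a trailing moving window."""
    return np.array([x[i - window + 1:i + 1].std(ddof=1)
                     for i in range(window - 1, len(x))])

def epanechnikov_smooth(y, bandwidth):
    """Local average with Epanechnikov kernel weights 0.75 * (1 - u^2), |u| <= 1."""
    t = np.arange(len(y))
    out = np.empty(len(y))
    for i in t:
        u = (t - i) / bandwidth
        w = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)
        out[i] = (w * y).sum() / w.sum()
    return out

# hypothetical annualised quarterly growth: a volatile decade, then calmer ones
rng = np.random.default_rng(0)
growth = np.concatenate([rng.normal(3, 4, 40), rng.normal(3, 2, 80)])
vol = rolling_std(growth, 20)            # five-year window = 20 quarters
smoothed = epanechnikov_smooth(vol, 5)   # bandwidth of five, as in the note
```

On data like these, the rolling standard deviation falls once the calm regime begins, mirroring the Great Moderation pattern the figure describes.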

Second, volatility also varies over the course of the business cycle, with recessions being periods of significantly higher volatility. Thus, the degree of information rigidity should fall during recessions and rise in subsequent expansions. Figure 3 below plots the response of the degree of information rigidity to an average US recession: the degree of information rigidity declines significantly over the course of recessions and rises as the economy gradually recovers.

Figure 3: The degree of information rigidity and US recessions

Notes: The figure plots the response of the coefficient βt on forecast revisions in Coibion and Gorodnichenko (2011), estimated for each quarter separately on the Survey of Professional Forecasters data. The response is normalised to be at the average value of the coefficient βt one period before a recession starts. The shaded region is the 95% confidence interval. The horizontal, thin, dashed line shows the average value of the coefficient βt. The vertical, thin, dashed line shows the time when the economy moves into a recession.

Conclusion

Our results point to economically large information rigidities for a variety of economic agents: professional forecasters, consumers, firms, financial institutions, and central bankers. Systematically integrating information rigidities on the part of these different economic agents into macroeconomic models should therefore be a promising area for future research. At the same time, the fact that the degree of information rigidity appears to respond not just to low-frequency variation in economic volatility but also to business-cycle variation suggests that future work should delve into state-dependence in the expectations formation process, as in Gorodnichenko (2008). More broadly, our results strongly support the burgeoning literature on imperfect information models in which rational agents find it optimal to be inattentive.

These models may shed light on a number of additional issues.

  • First, to the extent that information rigidities amplify the effects of economic shocks, the persistent increase in information rigidity during the Great Moderation suggests a new mechanism through which, along with increased risk-taking behaviour by financial market participants, the Great Moderation may have played a role in generating the Great Recession.
  • Second, we document in Coibion and Gorodnichenko (2011) that there was a synchronised updating of expectations immediately after the 9/11 attacks. Persistent inattention – during which agents track economic fundamentals poorly – followed by synchronised updating after a highly-visible shock could yield a formal representation of Minsky moments.
  • Finally, the presence of imperfect information can, like nominal rigidities, justify monetary and fiscal stabilisation policies. But the notion of information rigidities can also apply to policymakers and therefore potentially help account for periods of lax regulatory and policy supervision.

In short, continued work on imperfect information models may shed light on some of the most defining characteristics of the period preceding the Great Recession.

References

Coibion, O and Y Gorodnichenko (2011), “Information Rigidity and the Expectations Formation Process: A Simple Framework and New Facts”, NBER Working Paper 16537.

Coibion, O, and Y Gorodnichenko (2012), “What can survey forecasts tell us about informational rigidities?”, Journal of Political Economy, 120(1):116-159.

Gorodnichenko, Y (2008), “Endogenous Information, Menu Costs, and Inflation Persistence”, NBER Working Paper 14184.

Kydland, F and E Prescott (1982), “Time to Build and Aggregate Fluctuations”, Econometrica, 50(6):1345-1370.

Lucas, RE (1972), “Expectations and the Neutrality of Money”, Journal of Economic Theory, 4(2):103-124.

Mankiw, NG, and R Reis (2002), “Sticky Information Versus Sticky Prices: A Proposal to Replace the New Keynesian Phillips Curve”, Quarterly Journal of Economics 117(4):1295-1328.

Mankiw, NG, and R Reis (2010), “Imperfect Information and Aggregate Supply”, in Benjamin M Friedman and Michael Woodford (ed.), Handbook of Monetary Economics, 3(5):183-229, Elsevier.

Sims, CA (2003), “Implications of Rational Inattention”, Journal of Monetary Economics, 50(3):665-690.

Woodford, M (2001), “Imperfect Common Knowledge and the Effects of Monetary Policy”, published in P Aghion et al. (eds.), Knowledge, Information, and Expectations in Modern Macroeconomics: In Honor of Edmund S Phelps, Princeton University Press, 2002.


Topics: Frontiers of economic research, Macroeconomic policy
Tags: economic forecasting, economic modelling, imperfect information

Assistant Professor, UT Austin

Associate Professor in the Dept. of Economics, University of California, Berkeley
