Tony Yates has written a long discussion of microfounded macro, which I think reflects the views of many academic macroeconomists. I agree with him that microfoundations have done a great deal to advance macroeconomics. It is a progressive research programme, and a natural way for macroeconomic theory to develop. That is why I work with DSGE models. Yet in one respect I think he is mistaken, which is in his presumption that microfounded models are naturally superior as a tool for policy analysis.
Let me try to paraphrase his discussion. There are basically two ways to model the economy. The first is using microfounded models, which when done right avoid the Lucas critique. The second is to do something like a VAR, which lets the data speak, although doing policy with a VAR is problematic.
So what about aggregate models that use a bit of theory and a bit of econometrics? Let me quote.
“A final possibility is that there is no alternative but to proceed in non-micro-founded way. Yet some business has to be done – some policy decision, or some investment based on a forecast. In these circumstances, it’s ok to take a stab at the decision rules or laws of motions for aggregates in an economy might look like if you could micro-found what you are concerned with, and move on. Perhaps doing so will shed light on how to do it properly. Or at least give you some insight into how to set policy. Actually many so called microfounded models probably only have this status; guesses at what something would look like if only you could do it properly.”
As the language makes clear, we are talking at least second best here. Tony would not go so far as to outlaw non-microfounded models, but any such models are clearly inferior to doing “it properly”.
Yet what is the basis of this claim? A model should be as good a representation of the economy as possible for the task in hand. The modeller has two sources of information to help them: micro theory about how individuals may behave, and statistical evidence about how the aggregate economy has behaved in the past. Ideally we would want to exhaust both sets of information in building our model, but our modelling abilities are just not good enough. There is a lot in the data that we cannot explain using micro theory.
Given this, we have three alternatives. We can focus on microfoundations. We can focus on the data. Or we can do something in between - let me call this the eclectic approach. We can have an aggregate model whose equation specification owes something to theory, but which also attempts to track more of the data than any microfounded model would try to do. I can see absolutely no reason why taking this eclectic approach should produce a worse representation of the economy than the other two, whether your business is policy or forecasting.
Let’s take a very basic example. Suppose in the real world some consumers are credit constrained, while others are infinitely lived intertemporal optimisers. A microfoundation modeller assumes that all consumers are the latter. An eclectic modeller, on finding that consumption shows excess sensitivity to changes in current income, adds a term in current income into their aggregate consumption function, which otherwise follows the microfoundations specification. Which specification will perform better? We cannot know for sure, but I see circumstances in which the ‘ad hoc’ eclectic specification would do better than the misspecified microfounded model. (I give a more sophisticated example related to Friedman’s PIH and precautionary saving here.)
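The point of this example can be made concrete with a small simulation. Everything below is a stylised sketch of my own, not something from the post: I assume a particular share of credit-constrained consumers, a particular income process, and (unrealistically, for simplicity) that permanent income is directly observable to the modeller. The exercise just shows that when some consumers are hand-to-mouth, an 'ad hoc' specification with a current-income term fits aggregate consumption better than the misspecified pure-optimiser specification.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
lam = 0.3  # assumed share of credit-constrained (hand-to-mouth) consumers

# Income: a random-walk permanent component plus transitory shocks
perm = 100 + np.cumsum(rng.normal(0, 1, T))
income = perm + rng.normal(0, 2, T)

# Hand-to-mouth consumers spend current income; intertemporal optimisers
# spend out of permanent income, so aggregate consumption mixes the two
consumption = lam * income + (1 - lam) * perm

def ssr(X, y):
    """Sum of squared residuals from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(((y - X @ beta) ** 2).sum())

ones = np.ones(T)
# 'Microfounded' specification: consumption depends on permanent income only
ssr_micro = ssr(np.column_stack([ones, perm]), consumption)
# 'Eclectic' specification: add a current-income term to pick up excess sensitivity
ssr_eclectic = ssr(np.column_stack([ones, perm, income]), consumption)

print(ssr_micro, ssr_eclectic)
```

In this contrived world the eclectic regression fits almost exactly, while the pure-optimiser specification leaves the excess-sensitivity component of consumption in its residuals. The real question, of course, is whether the extra term helps or hurts once policy changes the environment, which is exactly the Lucas-critique trade-off discussed below.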
Now a microfoundation modeller might respond that the right thing to do in these circumstances is to microfound these credit constrained consumers. But that just misses the point. We are not talking about research programmes, but particular models at particular points in time. At any particular time, even the best available microfounded model will be misspecified, and an eclectic approach that uses information provided by the data alongside some theory may pick up these misspecifications, and therefore do better.
Another response might be that we know for sure that the eclectic model will be wrong, because (for example) it will fail the Lucas critique. More generally, it will not be internally consistent. But we also know that the microfounded model will be wrong, because it will not have the right microfoundations. The eclectic model may be subject to the Lucas critique, but it may also - by taking more account of the data than the microfounded model - avoid some of the specification errors of the microfounded model. There is no way of knowing which errors matter more.
It’s easy to see why eclectic models have a hard time. Because they look at both theory and data, they will never satisfy both theorists and econometricians. But that does not make them necessarily inferior to either microfounded models or VARs. We can speculate on reasons why, on at least some occasions, eclectic models may do better. But the key point I want to make here is that I do not know of any epistemological reasons for thinking eclectic models must be inferior to microfounded models, yet many macroeconomists seem to believe that they are.