A post at Freakonomics suggests macroeconomics is in more trouble than microeconomics because there is less data, and hence less room for empirical work:
In microeconomics, at least there is an abundance of good data, so people who are good at measuring and describing things can succeed. But in macro there is not much data, so most of the rewards are for the mathematics, not the empirics.
As a micro-theorist and hence an outsider, it seems to me that Hari Seldon and the other psychohistorians are wrong: events involving large numbers of people and firms are harder to predict than those involving a few. In micro, for example, it is much harder to understand imperfect competition than it is to understand, say, monopoly price discrimination. Even if competition is perfect, we know from general equilibrium theory that there is still a multiple-equilibria problem that makes it hard to predict economic trends. And if we allow monopolies so we can predict trends more easily, here is my main prediction: prices will go up, output will go down, the stock market will go up, and consumers will be worse off.
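The monopoly prediction follows from the textbook comparison of monopoly and competitive pricing. A minimal sketch, assuming linear demand P = a - bQ and constant marginal cost c (the parameters and function names here are illustrative, not from the post):

```python
# Monopoly vs. perfect competition under linear demand P = a - b*Q
# and constant marginal cost c. Illustrative parameters only.

def competitive(a, b, c):
    # Under perfect competition, price equals marginal cost;
    # the demand curve then pins down quantity.
    q = (a - c) / b
    return c, q

def monopoly(a, b, c):
    # The monopolist sets marginal revenue a - 2*b*q equal to
    # marginal cost c, then charges the demand price at that quantity.
    q = (a - c) / (2 * b)
    p = a - b * q
    return p, q

def consumer_surplus(a, b, p, q):
    # Triangle under the linear demand curve and above the price.
    return 0.5 * (a - p) * q

a, b, c = 10.0, 1.0, 2.0
pc, qc = competitive(a, b, c)
pm, qm = monopoly(a, b, c)
print(pc, qc)  # 2.0 8.0
print(pm, qm)  # 6.0 4.0
print(consumer_surplus(a, b, pc, qc))  # 32.0
print(consumer_surplus(a, b, pm, qm))  # 8.0
```

With these numbers the monopoly price is higher (6 vs. 2), output is lower (4 vs. 8), and consumer surplus falls from 32 to 8, which is exactly the "prices up, output down, consumers worse off" part of the prediction.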
1 comment
June 4, 2009 at 11:34 am
Sean
Actually, one could make the argument that macroeconomics has not been sufficiently mathematical, at least in certain dimensions. For example, macro has generally not taken the frictions generated by market incompleteness seriously. John Geanakoplos has developed a compelling model of leverage cycles where market incompleteness can play a huge role. If you were optimistic about the housing market you could make a highly leveraged bet, but if you were a pessimist you weren't able to easily do so until CDSs became readily available. Once the market had a greater degree of completeness, prices moved to a level that reflected the views of everyone in the economy with closer to equal weight, not just the views of the optimists.
I also think macro would be better served by studying paths within a basin of attraction rather than a sequence of fixed points. Smith and Plott have taught us an awful lot about prices in super-stationary exchange economies (let the thing converge to the Pareto set, run it again from the same initial endowment until it converges, repeat many times). Competitive equilibrium is a pretty lousy predictor of prices and allocations in the first iteration, and it usually becomes a better and better predictor across iterations (economies with strong income effects notwithstanding). Basically, final allocations in early iterations are distributed with support across a big chunk of the contract set, and that support gets squeezed down to the competitive equilibrium across iterations. Naturally occurring economies are not nearly so super-stationary, so it seems intuitive to characterize the distribution of paths the economy could take rather than the precise path.
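The "distribution of paths within a basin of attraction" idea can be sketched with a toy model. This is not the Smith/Plott experimental design, just a noisy tâtonnement with assumed demand D(p) = 10 - p and supply S(p) = p, so the competitive price is 5; many paths started from different prices settle into a cloud around the equilibrium rather than hitting it exactly:

```python
import random

# Many noisy price-adjustment paths settling into a basin around the
# competitive price. Demand D(p) = 10 - p, supply S(p) = p, so the
# market-clearing price is 5. All parameters are illustrative.

def path(p0, steps=200, k=0.1, noise_sd=0.05, seed=None):
    rng = random.Random(seed)
    p = p0
    for _ in range(steps):
        excess = (10 - p) - p          # excess demand at the current price
        p += k * excess + rng.gauss(0, noise_sd)  # adjust price, with shocks
    return p

# Run 100 paths from widely dispersed starting prices.
rng = random.Random(0)
finals = [path(rng.uniform(0, 10), seed=i) for i in range(100)]
mean_final = sum(finals) / len(finals)
spread = max(finals) - min(finals)
print(mean_final, spread)  # final prices cluster near 5, with residual spread
```

The object of interest here is the whole distribution of `finals`, not any single terminal price, which is the comment's point: characterize where the paths land, not one precise trajectory.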
I think the problem with macro is less one of being "too mathy" and insufficiently empirical than one of being too narrowly focused on existing models and insufficiently creative in developing new ones.