

Showing posts with label heterogeneity.

Saturday, 3 February 2018

Large models, small models and Brexit

Non-economists with no interest in modelling techniques can skip to the paragraph starting 'How is this all related to Brexit?'.

I promised to look at some of the other papers in the OxREP volume “Rebuilding macroeconomic theory” besides my own, but as usual other things - including Brexit - got in the way. In this post I want to talk about the paper by Haldane and Turrell, which is about Agent Based Models, or ABMs. Right at the end of this post, however, I will come back to Brexit.

As a result of the microfoundations hegemony, a paper advocating a different modelling strategy often feels it must start by describing some drawbacks of that hegemony, and this paper is no exception. I might talk about that some other time, but instead I want to recommend the paper as containing one of the most realistic discussions I have read of what ABMs can and cannot do.

As you might guess from the name, ABMs model the economy as a collection of a large number of different agents, each of which behaves in a specified way. The authors generalise the choice between internal and external consistency that I talk about in my paper to include a third dimension: the degree of heterogeneity.


As the chart in their paper shows, ABMs are all about allowing as much heterogeneity as you wish. This is not to say that other methods cannot do heterogeneity (they can), but ABMs major in this dimension, and in practice often keep the behaviour of agents relatively simple compared to a DSGE. (A slight quibble: I would argue that as DSGEs are internally consistent by definition, the orange square representing them should be a slimmer and perhaps taller rectangle.) ABMs (within the bounds of tractability) owe no allegiance to any school of thought: the paper has a nice table of the many different types of consumption function used in a range of ABM studies.
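
To make that idea concrete, here is a minimal, purely illustrative sketch of an agent-based simulation in Python. It is not the Haldane and Turrell model: the two consumption rules, the parameter values and the crude demand feedback are all my own assumptions, chosen only to show how heterogeneous agents following simple rules can generate aggregate dynamics.

```python
import random

random.seed(0)

class Household:
    """One agent, with its own income and a simple ad hoc consumption rule."""
    def __init__(self, income, rule):
        self.income = income
        self.wealth = 0.0
        self.rule = rule  # 'keynesian' or 'buffer'

    def consume(self):
        if self.rule == "keynesian":
            # spend a fixed fraction of current income
            c = 0.8 * self.income
        else:
            # buffer-stock flavour: spend extra only when wealth exceeds a target
            target = 2.0 * self.income
            c = 0.6 * self.income + 0.1 * max(self.wealth - target, 0.0)
        self.wealth += self.income - c
        return c

# a heterogeneous population: incomes and consumption rules differ across agents
households = [Household(income=random.uniform(0.5, 2.0),
                        rule=random.choice(["keynesian", "buffer"]))
              for _ in range(1000)]

for period in range(10):
    aggregate_demand = sum(h.consume() for h in households)
    # crude interaction: this period's average demand shifts everyone's income next period
    for h in households:
        h.income *= 0.5 + 0.5 * aggregate_demand / len(households)
    print(f"period {period}: aggregate demand = {aggregate_demand:.1f}")
```

In a real ABM the interactions (for example matching in labour, credit and goods markets) would be far richer, but the structure is the same: specify behaviour at the level of individual agents and let the aggregate outcome emerge from simulation.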

As the macroeconomy is indeed made up of many different types of agents who may be doing different things, and whose interactions may produce unexpected results, it seems as if ABMs can only be a good thing. But this additional freedom brings a large cost. Because, unlike in some hard sciences, there is a great deal of uncertainty about how people actually behave, we cannot treat any model as a black box whose output has to be accepted without question. No civil servant or central bank economist can go to politicians or governors and simply say 'it is what the model said'.

Exactly the same problem can arise with SEMs (Structural Econometric Models), simply because of their complexity or disaggregation. It could also arise with a complex DSGE. The first question any economist asks when seeing the output from any large and complex model is whether the result makes sense given the smaller theoretical models they carry around in their heads. That is why I proposed for SEMs the process I called theoretical deconstruction, in which model properties are either reduced to familiar results from simpler models, or used to show the limitations of those simpler models. Again, as the paper notes, a similar process needs to happen, and in some cases has happened, with results from ABMs.

How is this all related to Brexit? The results I talked about in my last post, showing how different degrees of Brexit would damage the economy to different extents, were produced by trade theory’s equivalent of ABMs: computable general equilibrium (CGE) models. These allow for considerable heterogeneity (across sectors and countries) in modelling trade. As Chris Giles recounts in this excellent piece, the model is more complex than anything the Treasury had before Brexit, and was built specifically to help with Brexit.

As Chris writes:
“It must have come as a bit of a shock to government economists that the moment some results of this new model were leaked this week, ministers rushed to deny the usefulness of the tools they commissioned. Such models are “always wrong”, declared Steve Baker, a junior Brexit minister, on Tuesday.”

As I note in a postscript to my last post, he went further on Thursday to suggest that civil servants had deliberately cooked the model to sabotage Brexit.

How do we know that this didn’t happen, apart from the implausibility of so many civil servants concocting such a conspiracy? Precisely because in this case the results from a highly disaggregated model broadly agree with most other studies, and also with common sense: the more difficult you make trade, the less of it there will be, and the more costly that will be for UK output. Chris ends with some words that should be sent to every journalist in the country.
“Ministers now have a choice. They can opt for an honest Brexit in which they argue in public that people should pay an economic price for their policies. Or they can opt for a dishonest Brexit, pretending they have a secret plan for economic nirvana and trashing their own internal economic evidence. Ministers’ initial reaction in disowning the analysis suggests deception is the government’s central Brexit strategy. People talk about a crisis in economics. After this episode, it is the crisis in politics that should really concern us.”






Wednesday, 18 July 2012

Modelling what you can see


This is a follow-up to this post, and to an earlier post by Paul Krugman. I’m currently reading an excellent account by Jonathan Heathcote et al of “Quantitative Macroeconomics with Heterogeneous Households”. This is the growing branch of mainstream macro that uses today’s computer power to examine the behaviour of systems with considerable diversity, as opposed to a single (or small number of) representative agent(s). (Heterodox economists may also be interested!) I want to talk about the methodological implications of this kind of analysis at some future date, but for now I want to take from it another example of letting theory define reality.

If you have an environment in which agents differ in the income (productivity) shocks they receive, a key question is how complete markets are. If markets are complete, agents can effectively insure themselves against these risks, and so aggregate behaviour can become independent of the distribution. This is a standard microfoundations device in models where you want to examine diversity in one area, like price setting, but want to avoid it spilling over into other areas, like consumption. (As the paper notes, the representative agent that emerges may not look like any of the individual agents, which is one of the points I want to explore later.)
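
As a toy illustration of why completeness matters (this sketch is mine, not anything from the Heathcote et al survey, and the numbers are arbitrary): with full insurance an individual’s consumption depends only on pooled income, so idiosyncratic shocks wash out; with no insurance at all, consumption inherits the full volatility of that individual’s own income.

```python
import random
import statistics

random.seed(1)

N, T = 200, 50  # number of agents and periods (arbitrary, for illustration only)

# each agent draws idiosyncratic income shocks around a common mean
income = [[1.0 + random.gauss(0.0, 0.3) for _ in range(T)] for _ in range(N)]

# complete markets (full insurance): every agent consumes pooled average income
pooled = [sum(income[i][t] for i in range(N)) / N for t in range(T)]

# no insurance (autarky): consumption simply tracks the agent's own income
own = income[0]

print(f"std dev of agent 0's consumption, complete markets: {statistics.stdev(pooled):.3f}")
print(f"std dev of agent 0's consumption, autarky:          {statistics.stdev(own):.3f}")
```

In the complete markets case individual consumption depends only on the average, which is why the distribution can drop out of the aggregate model.
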
Real world markets are not complete in this sense. We know some of the reasons for this, but not all. So the paper gives two different modelling strategies, which it describes in a rather nice way. The first strategy, which the paper mainly focuses on, is to ‘model what you can see’:

“to simply model the markets, institutions, and arrangements that are observed in actual economies.” 

The paper describes the main drawback of this approach as not being able to explain why this incompleteness occurs. The second approach is to ‘model what you can microfound’:

“that the scope for risk sharing should be derived endogenously, subject to the deep frictions that prevent full insurance.”

The advantage of this second approach is that it reduces the chances of Lucas critique type mistakes, where policy actions change the extent of private insurance. The disadvantage is that these models “often imply substantial state-contingent transfers between agents for which there is no obvious empirical counterpart”. In simpler English, they predict much more insurance than actually exists.
The first approach is what I have described in a paper as the ‘microfoundations pragmatist’ position: be prepared to make some ‘ad hoc’ assumptions to match reality within the context of an otherwise microfounded model. I also talk about this here. The second approach is what I have called the ‘microfoundations purist’ position: any departure from complete microfoundations risks internal inconsistency, which can lead to errors like (but not limited to) the kind Lucas described.

As an intellectual exercise, the ‘model what you can microfound’ approach can be informative. Hopefully it is also a stepping stone towards being able to explain what you see. However, to argue that it is the only ‘proper’ way to do academic macroeconomics seems absurd. One of the key arguments of my paper was that this ‘purist’ position only appeared tenable because of modelling tricks (like Calvo contracts) that appeared to preserve internal consistency, when in fact that consistency could not be established formally.

If you think that insisting on only ‘modelling what you can microfound’ is so obviously wrong that it cannot possibly be defended, you have obviously never had a referee’s report which rejected your paper because one of your modelling choices had ‘no clear microfoundations’. One of the most depressing conversations I have is with bright young macroeconomists who say they would love to explore some interesting real world phenomenon, but will not do so because its microfoundations are unclear. We need to convince more macroeconomists that modelling choices can be based on what you can see, and not just on what you can microfound.