Saturday, November 7, 2015

Expecting more precision than possible


Diane Coyle has taken her crusade against GDP (as a measure) to the NYTimes OpEd page. I don't have much issue with it in general; however, she claims:
Growth forecasts for gross domestic product in the United States at the end of this year vary from about 1.75 percent to 3 percent — a good measure of the lack of consensus.

Does this reflect a lack of consensus? I did some simple linear extrapolations in the IT model and looked at the error in the estimate of 2015 Q4 RGDP (i.e. propagating the error in the predictions of PCE inflation and NGDP through to RGDP) to get a handle on how a 125 basis point spread stacks up. Here are the extrapolations:



That's 1-σ (one standard deviation), so we'd expect the result to fall outside that range more than 30% of the time. What we end up with is an estimate between 0.7% and 4.9% at roughly 70% confidence -- a 420 basis point error band.
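For concreteness, here's a rough sketch of the error propagation involved -- a toy calculation with made-up numbers chosen to land near the band above, not the actual IT model code:

```python
import numpy as np

# Toy version of the error propagation described above (made-up numbers,
# not the actual IT model extrapolation): RGDP growth ~ NGDP growth - inflation,
# so independent 1-sigma extrapolation errors add in quadrature.
ngdp_growth, sigma_ngdp = 3.6, 0.7   # percent (hypothetical extrapolation +/- error)
inflation, sigma_pi     = 0.8, 2.0   # percent (hypothetical extrapolation +/- error)

rgdp_growth = ngdp_growth - inflation
sigma_rgdp = np.hypot(sigma_ngdp, sigma_pi)

print(f"RGDP growth = {rgdp_growth:.1f} +/- {sigma_rgdp:.1f} % (1-sigma)")
# -> 2.8 +/- 2.1 %, i.e. roughly the 0.7% to 4.9% band quoted above
```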

Even casual observation of the raw data tells us the 125 basis point spread (black rectangle on the second plot) is remarkably tight. My 420 basis point band would be more indicative of a lack of consensus!

The truth is that precision to less than a percentage point may not even be possible. It seems Diane Coyle is setting up economic forecasts to fail.


If a model result is silly, question the scope before questioning the model

A while ago I wrote a bit about scope conditions (or rather the lack of them) in economics. Nick Rowe provides us with a good illustration of this problem with his post on the effects of a delay in ending fiscal policy in New Keynesian models. After reading his post, I commented:
So wait: expectations of a delay in returning taxes to normal produce a change in expectations from an expected temporary change in tax rates to an expected permanent change in tax rates? 
Can we add in rational expectations of an expected delay (obviously the government doesn't stop on a dime), so that people don't revise their expectations of a permanent/temporary tax change based on a delay in returning taxes to normal?
I was being a bit tongue in cheek; the result that an infinitesimal delay in the end of fiscal stimulus changes people's expectations of fiscal policy from temporary to permanent (and thus changes the sign and magnitude of the multiplier) is, in a word, silly. It's a bit like a child thinking that because class didn't let out at exactly 3pm, class will instead last (literally) forever.

You could cure this by saying that the theory is valid for government delays dt << 1, so that t + 1 ≈ t + dt + 1. But then, that's the kind of thing that should have been a scope condition in the theory in the first place.

Nick says:
How long is "one period"? It's as short as you want it to be.
No; that is false. For one thing, it can't be shorter than the Planck time. In general, it can't be shorter than the time it takes for light to traverse the entire country in question and return to that point (roughly 30 milliseconds for the US). It takes a few milliseconds for sensory input to register in our brains.

Less sarcastically, there is a definite period of time over which rational expectations can reasonably take hold. Quarterly NGDP data from the BEA isn't released until a month after the quarter ends. Budgets generally have annual cycles. It takes months (weeks, on rare occasions) to get bills through the US congress. There obviously exist timescales over which macroeconomic and government processes happen.

The New Keynesian theory should have something to say about what "one period" means (it seems to be quarterly from various models I've seen). It should also have some kind of estimate about the relative size of the timescale of government actions (dt) and the reaction of the macroeconomy (dT). Is dt << dT? In that case Nick's analysis is silly, not the result. Nick implicitly assumes dt >> dT (the macroeconomy reacts faster than the government). Maybe that is true (data would help), but it is obviously not a region of validity of the New Keynesian model -- we know this because you get silly results like sudden shifts between fiscal policy being contractionary or expansionary based on a one month delay in changing marginal tax rates.

It's a bit like saying electrodynamics predicts atoms will radiate away all their energy and collapse, so atoms must not exist. But electrodynamics is not valid for cases where ħ ~ dx dp; you need quantum mechanics. That is to say, you need to understand the scope conditions of your theory. If you've found a problem, the problem could well be that you've applied the theory incorrectly.

Funny enough, this is very similar to the mathiness issue between Paul Romer and Robert Lucas. The issue has the same form: Lucas's model is assumed to have infinite scope for its variables, but Romer points out that the model has different limits for 1/β >> T and for 1/β << T, where 1/β is the timescale for innovation and T is the observation time of the economy. Those two limits have different model interpretations: innovation is slow (so it never happens) versus innovation is fast (so it has already happened). These are entirely different kinds of worlds.
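A toy illustration of why the order of limits matters (this is my own schematic stand-in, not the actual Lucas/Romer math):

```python
import numpy as np

# Schematic stand-in (my own, not the Lucas-Moll model): let f(beta, T) be the
# fraction of innovation that has not yet occurred after observing for time T.
def f(beta, T):
    return np.exp(-beta * T)

print(f(beta=1e-6, T=100.0))   # 1/beta >> T: ~1, innovation is so slow it never happens
print(f(beta=1e2,  T=100.0))   # 1/beta << T: ~0, innovation has effectively already happened
```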

But no one in economics seems to care about scope [1]. Nick Rowe is just fine with the idea that the New Keynesian model is valid for both extremely fast and extremely slow changes in government fiscal policy. He says the extremely slow version gives us silly results so we shouldn't trust the extremely fast version either. 

But you can't extrapolate from one limit to the other. The more logical conclusion is that the extremely slow version is out of scope of the theory.

Footnotes:

[1] Don't just take my word for it; Noah Smith says:
I have not seen economists spend much time thinking about domains of applicability (what physicists usually call "scope conditions"). But it's an important topic to think about.

Friday, November 6, 2015

The value premium and non-ideal information transfer

I always get nervous when everything I see appears to be confirmation of some pet theory I have. While this is in fact a property of a theory that is correct, it's also a property of confirmation bias and delusion. Of course, being able to ask the question is a sign that you're not delusional. Or is that just confirmation bias ...

Anyway, Noah Smith has an article that talks about an effect that appears to be confirmation of the information transfer model (ITM). It's called the value premium, and it's disappearing.

I used the ITM to build a toy model [1] of stock prices in terms of book value to try to understand the so-called dark matter problem. However, if you consider non-ideal information transfer, prices should fall below their ideal price [2] (because the solutions to the differential equation act as a bound via Gronwall's inequality). Here's the picture from [2] (P is price, S is supply of "book widgets", called B in [1], D is demand, called M in [1] for market capitalization):


There would be stocks where the realized price (green line) was less than the ideal price (the black line bound). The so-called endowment effect would lead to more ideal information transfer over time if there are more and more trades -- assuming there wasn't some kind of non-ideal behavior leading to a fall in price.

A non-ideal price would be seen as a "value premium", and it would tend to vanish as the market for those stocks became more ideal.
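Here's a minimal numerical sketch of that picture, using the ideal information equilibrium condition p = dM/dB = k M/B as the bound and a made-up non-ideal factor -- an illustration of the bound, not a fit to any data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Ideal information transfer: dM/dB = k M/B, so M ~ B^k and p_ideal = k M/B.
# Non-ideal transfer keeps the realized price at or below that bound (Gronwall).
k = 2.0
B = np.linspace(1.0, 10.0, 200)              # "book widget" supply (book value)
M = B**k                                     # ideal market cap solution
p_ideal = k * M / B                          # ideal price (the black-line bound)

rng = np.random.default_rng(0)
# made-up non-ideal factor in (0, 1]: a random walk pushed below the bound
nonideal = np.exp(-np.abs(np.cumsum(rng.normal(0, 0.02, B.size))))
p_realized = p_ideal * nonideal              # the green line: a "value premium"

plt.plot(B, p_ideal, "k", label="ideal price (bound)")
plt.plot(B, p_realized, "g", label="realized non-ideal price")
plt.xlabel("B (book value)"); plt.ylabel("price"); plt.legend(); plt.show()
```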

Thursday, November 5, 2015

Monkeys and markets

So I found out my Dad reads the blog, and he sent me a link to an article from about 10 years ago about observing capuchin monkeys engaging in market behavior set up by experimenters. Previously I noted that E. coli also exhibits something like supply and demand.

I showed you could get this behavior from random agents, based primarily on the properties of the available state space. It turns out my contribution wasn't original as Gary Becker had also worked this out in the 1960s (and economists responded negatively to giving up the idea of utility maximization).

The implicit theorizing in the article is great:
It took several months of rudimentary repetition to teach the monkeys that these tokens were valuable as a means of exchange for a treat and would be similarly valuable the next day. Having gained that understanding, a capuchin would then be presented with 12 tokens on a tray and have to decide how many to surrender for, say, Jell-O cubes versus grapes. This first step allowed each capuchin to reveal its preferences and to grasp the concept of budgeting. 
Then Chen introduced price shocks and wealth shocks. If, for instance, the price of Jell-O fell (two cubes instead of one per token), would the capuchin buy more Jell-O and fewer grapes? The capuchins responded rationally to tests like this -- that is, they responded the way most readers of The Times would respond. In economist-speak, the capuchins adhered to the rules of utility maximization and price theory: when the price of something falls, people tend to buy more of it.
Revealed preferences, budget constraint, price shocks, rational responses, utility maximization: the whole framework is there. Even intertemporal expectations ("similarly valuable the next day"). Note that the axiom of revealed preference is equivalent to assuming real-valued utility functions (ensuring transitivity creates a total order which means that whatever it is, it can be represented with real numbers if it's not pathological).
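A tiny illustration of that parenthetical (my own, with made-up bundles): a complete, transitive preference ordering over finitely many bundles can always be represented by real-valued utilities -- just rank them.

```python
# Made-up bundles, listed from best to worst (assumed complete and transitive).
ranking = ["2 grapes + 1 jello", "3 grapes", "1 jello", "nothing"]

# Any order-preserving assignment of real numbers represents these preferences.
utility = {bundle: len(ranking) - i for i, bundle in enumerate(ranking)}
print(utility)
```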

Of course, you can get these results just from the properties of the state space. Start with 12 tokens and look at the available state space for Jell-O (quantity Q₁) at prices of 0.8, 1 and 2 tokens. If the monkeys chose a point completely at random in the available state space, they'd on average end up at the centroid (colored points) of the area bounded by the budget constraint (colored lines):


If we change the price of Jell-O from 2 to 1 and then 0.8 token, we move (on average) from the red point to the blue point to the green point. Plotting these out in price-quantity space, we get a downward sloping demand curve:


Apparent rational behavior from averaging random choices.
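Here's a minimal simulation of that state-space argument (my own sketch, fixing grapes at 1 token and using the 12-token budget from the article):

```python
import numpy as np

budget, p_grapes = 12.0, 1.0          # 12 tokens; grapes assumed at 1 token each
rng = np.random.default_rng(0)

def average_jello_demand(p_jello, n=100_000):
    # sample (q_jello, q_grapes) uniformly in a box and keep only the points
    # under the budget line -- i.e. uniform over the opportunity set
    q1 = rng.uniform(0, budget / p_jello, n)
    q2 = rng.uniform(0, budget / p_grapes, n)
    ok = p_jello * q1 + p_grapes * q2 <= budget
    return q1[ok].mean()              # centroid of the triangle: budget / (3 p_jello)

for p in (2.0, 1.0, 0.8):
    print(f"Jell-O price {p}: average quantity {average_jello_demand(p):.2f}")
# prices 2, 1, 0.8 tokens -> about 2, 4, 5 cubes: a downward-sloping demand curve
```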

Note that if you add dimensions, i.e. different goods (grapes, Jell-O, crackers, chocolate, beer, ...), the centroids approach the budget constraint lines (most of the volume of a high dimensional space is near its surface), and the random choice approximates utility maximization if you consider utility functions that aren't too asymmetric.
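A quick way to see the "volume near the surface" claim (again a sketch of my own): sample uniformly from the budget set with d goods and look at the average fraction of the budget that gets spent -- it approaches 1, i.e. the constraint surface, as d grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_spent_fraction(d, n=50_000):
    # uniform points in the d-good budget set via the exponential/Dirichlet trick;
    # the fraction of the budget spent is then sum(x_i), with mean d/(d+1)
    e = rng.exponential(size=(n, d + 1))
    x = e[:, :d] / e.sum(axis=1, keepdims=True)
    return x.sum(axis=1).mean()

for d in (2, 5, 20, 100):
    print(d, round(mean_spent_fraction(d), 3))
# 2 -> ~0.667, 5 -> ~0.833, 20 -> ~0.952, 100 -> ~0.990: the centroid approaches the constraint
```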

Wednesday, November 4, 2015

Are we no longer safe from a recession?

I had a speculative recession indicator based on whether interest rates are high or low relative to the IT model trend derived from the NGDP-M0 path, all described in this post. I mentioned that I would check it out when I got a chance, given that there seemed to be potential for really low inflation over the next 6 months, as I talked about here.

Well, it turns out for the first time since 2008, the effective Fed funds rate (the short interest rate) is above the IT model trend:


And here's the model (not the trend extracted from the NGDP-M0 path, shown at the bottom of this post) of the 3-month rate (as it usually appears on this blog):


Here's the NGDP-M0 path -- but it doesn't seem to be above trend (so this may only be a small avalanche):


Partitions of an economy

I am working towards quantifying something I've said several times (e.g. here or here or here) -- that the trend towards lower growth as economies grow is a result of there being more ways for an economy to be composed of many low-growth industries than of a few high-growth industries, the former being a higher entropy state. This was an analogy with thermodynamics: there are more ways to emit several low energy photons than a few high energy ones, hence hot things are red, not blue (unless you get really hot ...)

Essentially, I am trying to derive the "economic temperature" ~ log M in a more rigorous way. See the section of the paper on "statistical economics" for a starting point.

Let's say nominal output (NGDP) is given by

N(t) = N0 exp(ρ t)

with growth rate ρ. Let's say that output consists of several industries (or firms), each with their own growth rate rᵢ so that

N(t) = A₁ exp(r₁ t) + A₂ exp(r₂ t) + ...

If the growth rates are small and we keep ourselves focused on the short run, then we can say ρ t << 1 and rᵢ t << 1 so that

N0 ~ A₁ + A₂ + ...
ρ ~ (1/N0) (A₁ r₁ + A₂ r₂ + ... )
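As a quick numerical check of this short-run approximation (with made-up industry sizes and growth rates):

```python
import numpy as np

# N(t) = sum_i A_i exp(r_i t) vs. the short-run approximation N0 exp(rho t)
# with N0 = sum_i A_i and rho = (1/N0) sum_i A_i r_i (made-up sizes and rates).
rng = np.random.default_rng(0)
A = rng.uniform(1, 100, size=10)        # industry sizes
r = rng.uniform(-0.02, 0.05, size=10)   # industry growth rates
t = 0.25                                # one quarter, so r_i t << 1

N0 = A.sum()
rho = (A * r).sum() / N0

exact = (A * np.exp(r * t)).sum()
approx = N0 * np.exp(rho * t)
print(exact, approx)                    # nearly identical when r_i t << 1
```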

Let's say N0 is an integer, but very large. Then the Aᵢ represent a partition of N0. For example, here's a partition of N0 = 1000:



For N0 >> 1, these partitions have a smooth limiting distribution (the red line, the Vershik-Kerov-Logan-Shepp limit shape). Note that I mentioned Ferrers diagrams at the very beginning of this blog. Here are several partitions together:



The red curve represents the most likely partition of an economy of nominal output N0 into various industries -- so we have the distribution of Aᵢ. The overall growth rate is then proportional to the dot product Σ Aᵢ rᵢ ... note that Aᵢ rᵢ is itself another partition, this time of N0 ρ.
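To get a feel for that red curve, here's a sketch that samples (near-)uniform random partitions using Fristedt's independent-geometric-multiplicity construction and compares the scaled Ferrers boundary to the VKLS limit shape exp(-c x) + exp(-c y) = 1 with c = π/√6. The unconditioned sampler only hits N0 approximately, which is fine for a picture:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
q = np.exp(-np.pi / np.sqrt(6 * n))

def sample_partition():
    # multiplicity of part size k is geometric with parameter q**k (Fristedt);
    # unconditioned, so the parts only sum to roughly n -- fine for illustration
    ks = np.arange(1, n + 1)
    mult = rng.geometric(1 - q**ks) - 1
    return np.repeat(ks, mult)

parts = sample_partition()
# Ferrers boundary: number of parts >= x, with both axes scaled by sqrt(n)
x = np.linspace(0.1, 4.0, 9)
boundary = np.array([(parts >= xi * np.sqrt(n)).sum() for xi in x]) / np.sqrt(n)
c = np.pi / np.sqrt(6)
vkls = -np.log(1 - np.exp(-c * x)) / c   # the limit shape solved for y
print(np.c_[x, boundary, vkls])          # one sample fluctuates around the limit shape
```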

So if Aᵢ has the VKLS distribution and Aᵢ rᵢ also has the VKLS distribution, we should be able to work out the distribution of rᵢ (under the assumption that rᵢ and Aᵢ are independent, Gibrat's law) ...

To be continued ...

...

Update 11/5/2015:

The last statements aren't really true because of the possibility of negative growth rates rᵢ. The terms Aᵢ rᵢ are not a partition of N0 ρ.

Statistical mechanics of ants

I like to say that the IT model treats human behavior as so complex that it looks like randomness at the micro level. Basically, there is so much going on that statistical mechanics kicks in (most of the time [1]) and the macro behavior is well described by a gas or fluid.

In this video (article here) from Georgia Tech, the behavior of individual ants is very complicated, but at the macro level the ants behave as a fluid (or more accurately a state that has properties of solids and fluids):


The decision process in each tiny (but still too complex to model with a computer!) ant brain -- at what force level to let go and "play dead", whether to stick together or not -- is aggregated into a single number: a viscosity coefficient.

Footnotes:

[1] Recessions and other mass coordination events are the exception, but most of the time, the economy is not in a recession.

Tuesday, November 3, 2015

CPI inflation predictions and unwarranted speculation

I previously noted that the CPI model seems to work better with a lag of about y0 = 1.2 years between the price level function P(N(t - y0), M(t - y0)) and the data. Here was the graph:

This resulted in predicted values of the CPI out to about a year ahead, based on M (monetary base minus reserves) and N (NGDP) from today. Since the core CPI data for October 2015 comes out in mid-November 2015 (in about two weeks), I thought I'd reiterate this prediction. It is notable for predicting a fall in core CPI inflation from December through May of next year, followed by a rise in the summer of next year.
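The mechanics of the lag are simple. Here's a minimal sketch assuming some already-fit price level function P(N, M) -- the function names are placeholders, not the actual model code:

```python
y0 = 1.2  # lag in years between (N, M) and the price level data

def lagged_price_level(P, N, M, t):
    # P: fitted price level function; N, M: functions of time (years) returning
    # NGDP and currency (base minus reserves); the CPI at t uses inputs from t - y0
    return P(N(t - y0), M(t - y0))

def implied_inflation(P, N, M, t):
    # year-over-year inflation implied by the lagged price level, in percent
    return 100 * (lagged_price_level(P, N, M, t) /
                  lagged_price_level(P, N, M, t - 1.0) - 1)

# e.g. with toy stand-ins for the fitted model and the data series:
# implied_inflation(lambda N, M: (N / M)**0.5, N_series, M_series, 2016.0)
```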

Sometimes tables are easier to read than graphs, so here's the table of predictions from the lag model:

Core CPI inflation [%]

2015 Oct = 1.6 ± 1.7           Actual: 2.4    (Δ = + 0.8)   (12/2/2015)
2015 Nov = 1.6 ± 1.7          Actual: 2.2    (Δ = + 0.6)   (2/19/2016)
2015 Dec = 0.8 ± 1.7          Actual: 1.9    (Δ = + 1.1)   (2/19/2016)
2016 Jan = 0.8 ± 1.7           Actual: 3.5    (Δ = + 2.7)   (3/17/2016) still whoa!
2016 Feb = 0.8 ± 1.7           Actual: 3.4    (Δ = + 2.6)   (3/17/2016) whoa!
2016 Mar = 0.5 ± 1.7          Actual: 0.8    (Δ = + 0.3)   (4/14/2016) [spike looks transient]
2016 Apr = 0.5 ± 1.7           Actual: 2.3    (Δ = + 1.8)   (5/21/2016)
2016 May = 0.5 ± 1.7          Actual: 2.4    (Δ = + 1.9)   (6/16/2016)
2016 Jun = 1.7 ± 1.7           Actual: 2.0    (Δ = + 0.3)   (7/15/2016)
2016 Jul = 1.7 ± 1.7            Actual: 1.1    (Δ = – 0.6)   (8/20/2016)
2016 Aug = 1.6 ± 1.7          Actual: 3.1    (Δ = + 1.5)   (11/03/2016)
2016 Sep = 0.9 ± 1.7          Actual: 1.3    (Δ = + 0.4)   (11/03/2016)
2016 Oct = 0.9 ± 1.7          Actual: 1.8    (Δ = + 0.9)   (11/30/2016)
2016 Nov = 0.9 ± 1.7         Actual: 1.8    (Δ = + 0.9)   (12/15/2016)

Note that the error is fairly large, but even so we should still see a fall in inflation in the winter/spring of 2016. Also note that core CPI inflation tends to run about 30 basis points higher than core PCE inflation, so here are the "implied" PCE inflation results:

"Implied" core PCE inflation [%]

2015 Oct = 1.3 ± 1.7          Actual: 0.7    (Δ = – 0.6)   (3/28/2016)
2015 Nov = 1.3 ± 1.7         Actual: 1.5    (Δ = + 0.2)   (8/20/2016)
2015 Dec = 0.5 ± 1.7         Actual: 0.8    (Δ =  + 0.3)   (8/20/2016)
2016 Jan = 0.5 ± 1.7          Actual: 3.3     (Δ = + 2.8)   (8/20/2016) still whoa!
2016 Feb = 0.5 ± 1.7          Actual: 2.3     (Δ = + 1.8)   (8/20/2016)
2016 Mar = 0.2 ± 1.7          Actual: 0.8     (Δ = + 0.6)   (8/20/2016)
2016 Apr = 0.2 ± 1.7          Actual: 2.4     (Δ = + 2.2)   (10/07/2016)
2016 May = 0.2 ± 1.7         Actual: 2.0     (Δ = + 1.8)   (10/07/2016)
2016 Jun = 1.4 ± 1.7          Actual: 1.0     (Δ = – 0.4)   (10/07/2016)
2016 Jul = 1.4 ± 1.7           Actual: 1.8     (Δ = + 0.4)   (11/30/2016)
2016 Aug = 1.3 ± 1.7         Actual: 2.3     (Δ = + 1.0)   (11/30/2016)
2016 Sep = 0.6 ± 1.7         Actual: 1.3     (Δ = + 0.7)   (1/18/2017)
2016 Oct = 0.6 ± 1.7         Actual: 1.5     (Δ = + 0.9)   (2/03/2017)
2016 Nov = 0.6 ± 1.7        Actual: 0.2     (Δ = – 0.4)   (2/03/2017)

If core PCE and core CPI continue to follow each other (with the 30 basis point adjustment), I am wondering if there will be lots of articles about seriously undershooting inflation in the winter/spring of 2016. Will this have any impact on the US elections? It is hard to tell, since the public tends to think inflation = bad. Politicians will probably ignore it, and Matthew Yglesias will probably write stories about the "biggest issue not being discussed in the 2016 campaign" [update: here it is, update 2: here's an even better one!**]. With these (potentially) low numbers, will we get QE4? What will the Fed do?

The data don't show the previous downward dip at the beginning of 2015 (associated with the really bad 2014 Q1 NGDP growth number), but the forthcoming dip (associated with the bad 2014 Q4 and 2015 Q1 NGDP numbers as well as low base growth) is more sustained, so it is more likely to be realized in the data.

In fact, the last time the model showed core CPI inflation this low was just before the recession. That could mean the US NGDP data might show a recession in the next two quarters. However, the NGDP-M0 path doesn't seem to be above trend, so this could well be nothing. I will have a look at this indicator in an update to this post.

...

Update 1/20/2016

Updated the CPI graph with latest data (first gray line shows last data point available when prediction was made, second shows last available as of update -- thus data between lines is predicted):





Update 11/4/2015

Here is that update. And that indicator actually indicates a recession is possible -- interest rates (the effective Fed funds rate in this case) are above the trendline:


...

Update 15 December 2016

Here is the final graph for CPI. As you can see, there is a positive fluctuation in CPI inflation at the same time as a negative fluctuation in the model (in this case, due to a negative fluctuation in lagged NGDP). Overall, the model is biased low (biased error). It's not wholly unprecedented (see 2007-2008), but I'm not sure this model is any more useful than the regular inflation model.


...

Update 3 February 2017

Most of the data revisions for inflation are in, so we can officially close this out by saying the lagged version is not any better than the regular inflation model. The result was biased low (PCE is yellow, CPI is blue):


I will continue to monitor this model to see if it is useful over the longer run.

...

** From the article (24 August 2016):
If it were me, though, I would ask [Hillary Clinton] about San Francisco Federal Reserve President John Williams’s recent declaration that the Federal Reserve needs a new approach to fighting recessions — either running a higher rate of inflation during non-recession periods, or else abandoning inflation targeting altogether in favor of what’s called NGDP level targeting
This is a topic that has attracted zero attention during the campaign, but whether the Fed adopts his ideas or not will directly touch the lives of every single American. And with two open seats on the Federal Reserve Board of Governors, the next president will have an immediate chance to have an impact on the subject.

Who should we listen to?

Scott Sumner asks the question, regarding macroeconomics: who should we listen to? He tries to suggest that we should assign a higher Bayesian prior probability to someone who has made several qualitative, ill-defined conditional predictions, using a model only that person can use, which are declared correct by the person who made them.

This is exactly who you shouldn't listen to.

You should assign a higher Bayesian prior to someone who makes quantitative predictions with well-defined conditions, using a publicly available model, which are declared correct by someone else.

What is particularly important is that other people can use the model. For example, the ITM equations and parameters are publicly available here:

http://econpapers.repec.org/RePEc:arx:papers:1510.02435

Anyone can use them to make predictions or even build their own models.

No one can use Scott Sumner's model except Sumner himself. Here's his paper on monetary offset [pdf]. The model appears to be an AD-AS model; however, the specifics depend on whether expansionary monetary policy is "effective", and there is no model of monetary policy effectiveness other than the ability to move exchange rates (#3, below), move stock markets (#7, below) or successfully implement contractionary monetary policy (#8, below). So there is no evidence of effectiveness in terms of nominal (or real) output, the default metric of macroeconomics. These market moves also have not been large enough to be distinguished from noise (see e.g. here).

An additional issue is that Sumner does not describe how to produce counterfactuals with regard to monetary policy. We have no idea what would have happened in Sweden, the EU (#8, below), the US (#1, #2, and #4 below), or Japan (#3, below) if monetary policy had been different. We basically have to ask Scott whether things would have been different and, if so, how different.

Sumner also does not specify the counterfactuals with regard to fiscal policy (#2, below), nor does he specify exactly what constitutes fiscal policy. In fact, because it wasn't exactly specified, there was a question as to whether dividends from GSEs constituted contractionary fiscal policy. Whether or not they do, this could not have been answered without hearing from Sumner -- again, we have to rely on Sumner himself to understand his model.

Sumner describes his model in terms of the "hot potato effect" and the "musical chairs model", neither of which contains any reference to productivity. Yet Sumner's model has something to say about productivity (#9, below). How do we mere mortals extract this effect?

But Scott Sumner thinks we should listen to him. Well, we have to in order to know what his model says! Here's his list of predictions:
1.  Those who, at the time, thought that Fed policy was too tight in late 2008 (something that Bernanke has now admitted).
What is "tight"? Who cares about a single opinion (Bernanke)? Do we know for a fact that tight monetary policy had any consequence in 2008? What is the counterfactual?
2.  Those who correctly predicted that the contractionary effects of the 2013 fiscal cliff would be offset by monetary policy.
This one does not count.

We don't know the counterfactual (fiscal or monetary) here so there is no way to tell whether 2013 was more or less contractionary than the counterfactual. Most of the so-called contractionary fiscal policy was dividends from GSEs, so the actual deficit reduction was tiny compared to the noise in NGDP growth and therefore could not ever be empirically extracted from the data. The result was indeterminate.
3.  Those who correctly predicted that the BOJ could sharply depreciate the yen, if they wanted to.
How can we tell if the BOJ "wants" to do something? Do moves in the exchange rate correspond to changes in NGDP? How do we convert "fall in exchange rate" to "rise in NGDP"?
4.  Those who claimed Bernanke was wrong in claiming monetary policy was highly accommodative in the years after 2008.  (A critique that has now been confirmed by Vasco Curdia.)
What is the counterfactual? Would more accommodative monetary policy have resulted in higher NGDP? No recession? We don't know.
5.  Those who said the Fed’s predictions for real GDP growth were too high.
How "high"? How is this discernible given noise and errors in the data? The Fed's predictions of RGDP are actually ok given the fluctuations in the measurements:


Also, I got these right -- shouldn't we listen to me?
6.  Those who said that if we ended the extended unemployment benefits, the unemployment rate would fall back to the natural rate faster than the Fed expected.  (Note this is the opposite of the previous prediction, which makes the success of both predictions especially interesting.)
This one also appears to be incorrect.

The acceleration in hiring appears to actually be a result of the implementation of Obamacare: the increased rate of hiring is confined to the health care industry.
7.  Those who first suggested that central banks could do negative IOR, and that markets would treat the policy as expansionary.
Did the markets move far enough to indicate something other than random noise? How big should the effect be?
8.  Those who predicted that Trichet’s contractionary monetary policy of 2010-11, in response to transitory price rises from oil and VAT, was a big mistake.  Ditto for those who made the same prediction about Sweden.
Is this a mistake? Do we know the counterfactual without the contractionary policy? No.
9.  Those who first spotted the fact that the UK’s problem was productivity, not jobs, and hence that fiscal austerity was not the problem.
Sumner's model has inputs for productivity?
10.  Those who predicted low interest rates as far as the eye can see.
What is "low"? "[A]s far as the eye can see" would be ok here (since it means indefinitely), but there is an implied conditional -- if the Fed creates an NGDP futures market and targets it, will rates rise? How far? What about a different policy? What will raise rates and by how much?

Monday, November 2, 2015

Musical chairs and Taylor rules

Scott Sumner recently reiterated his musical chairs model. John Handley has the best take on that post. However, I thought I'd try to analyze the musical chairs model -- here it is ...
The Musical Chairs model: 
1.  In the short run, employment fluctuations are driven by variations in the NGDP/Wage ratio.
2.  Monetary policy drives NGDP, by influencing the supply and demand for base money.
3.  Nominal wages are sticky in the short run, and hence NGDP shocks cause variations in employment in the same direction.
4.  In the long run, wages are flexible and adjust to changes in NGDP. Unemployment returns to the natural rate (currently about 5% in the US.)
So let's look at 1 and 3: (1) "In the short run, employment fluctuations are driven by variations in the NGDP/Wage ratio." (3) "Nominal wages are sticky in the short run, and hence NGDP shocks cause variations in employment in the same direction." This comes down to: In the short run, employment fluctuations are driven by variations in NGDP.

If prices are sticky along with wages, then the price level is slowly varying in the short run and therefore changes in RGDP are proportional to changes in NGDP and we can say: In the short run, employment fluctuations are driven by variations in RGDP. This is just Okun's law. So Sumner's 1 and 3 are just Okun's law.

Next we'll tackle number 4. Let's split 4 into two pieces. First piece first:
4a. In the long run, wages are flexible and adjust to changes in NGDP.
Since the short run changes in RGDP are already accounted for in the changes in employment (Sumner's 1 and 3), the primary change in long run NGDP must come from changes in the price level. This means 4a is really just the statement: Wages are a price.

Now for the second piece:
4b. Unemployment returns to the natural rate (currently about 5% in the US.)
This is just an assumption that a unique equilibrium exists -- that the fluctuations in 1 and 3 are around some level.

Sumner's 2 is a bit vague as presented here, but in previous posts (linked here) Sumner says that the central bank sets expectations for NGDP with monetary policy. Instead of a single NGDP futures market target (his preferred policy), the Fed sets this with a combination of a de facto inflation target, IOR and its internal forecasts of NGDP and inflation.

Additionally, Sumner says his model is consistent with rational expectations; therefore the expected value of NGDP at time period t+1 is the actual value of NGDP at time period t+1 plus an (unbiased) error. He also adds in the possibility of a systematic error term stemming from the difference between targeting an ideal, liquid NGDP futures market price/growth rate and whatever the central bank does in practice.

However, we can frame this in terms of a Taylor rule. We start with:

i = π + r* + a (π - π*) + a (y - y*)

Sometimes the parameter a is taken to be different for the two terms, but let's keep them the same. Now let's re-arrange:

i - π - r* = a (π + y - π* - y*)

Using n ≡ π + y and n* ≡ π* + y*, we have

i - π - r* = a (n - n*)

The LHS is the difference between the short term nominal interest rate target and the equilibrium nominal interest rate i* = π + r*. We can call r* the equilibrium Wicksellian rate if we'd like. Let's define this deviation to be Sumner's systematic error (SE) plus random unbiased error σ:

i - π - r* ≡ SE + σ

We don't really know what SE is (except in the case of a central bank targeting an NGDP futures market), so this is completely valid. We have:

SE + σ = a (n - n*)

If a = 1, this is Sumner's equation (2) and (3) shown at this link. That means monetary policy in Sumner's model can be represented as a Taylor rule that is always off (in the long run) by some amount SE unless the central bank targets an NGDP futures market.
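Here's a quick symbolic check of the rearrangement above (just the algebra from the Taylor rule to the n - n* form, with the same coefficient a on both gaps):

```python
import sympy as sp

i, pi, r_star, a, y, pi_star, y_star = sp.symbols("i pi r_star a y pi_star y_star")

taylor = sp.Eq(i, pi + r_star + a * (pi - pi_star) + a * (y - y_star))
n, n_star = pi + y, pi_star + y_star
rearranged = sp.Eq(i - pi - r_star, a * (n - n_star))

# both equations impose the same condition, so their residuals differ by zero
diff = (taylor.lhs - taylor.rhs) - (rearranged.lhs - rearranged.rhs)
print(sp.simplify(diff))   # 0
```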

This of course should make a lot of sense from Sumner's view. If a central bank perfectly targeted an NGDP futures market, then whatever the nominal interest rates was, it would be the equilibrium nominal interest rate. If there was some other less efficient monetary policy target, then that policy could be expressed as a deviation of the observed nominal interest rate from the equilibrium nominal interest rate (whatever it was).

The systematic error can be expressed as a function of the interest rate SE = SE(i). Interest rates being "high" are a sign of loose monetary policy, rates being "low" are a sign of tight monetary policy (per Sumner's invocation of Milton Friedman). Therefore SE should be positive when monetary policy is "loose", negative when it is "tight" and zero when it is 'right on' as per an NGDP futures market.

Putting this all together:

1. Okun's law in the short run
2. A Taylor rule: i = π + r* + (n - n*) = π + r* + SE(i)
3. Okun's law in the short run
4. Wages are a price. There is an equilibrium in the labor market.

Overall, 1, 3 and 4 are basically true of any economic model (unless it is inconsistent with the data). Therefore you could probably represent Sumner's model as a New Keynesian model with a particular Taylor rule containing a systematic error term.

Note: n - n* is never zero unless you target an NGDP futures market, and on average n - n* = SE(i). This doesn't really pin the model down completely, since SE(i) is determined by some vague judgment calls about whether interest rates are "high" or "low" relative to where they "should be". However, if someone (John Handley?) wanted to, they could generically take SE to be a negative value approximately equal to the current (nominal) output gap.

Update 11/3/2015

I wanted to make clearer that the point of this post was that Sumner's musical chairs model as described is so generic that it is likely any number of DSGE models (New Keynesian or otherwise) could fit the 4 listed characteristics while having wildly different policy implications.

Analogous to the need for at least three points to define a plane, we need more than these 4 characteristics to define a specific economic model.