Wednesday, August 15, 2012

On Prices, Narratives, and Market Efficiency

The fourth anniversary of the Lehman bankruptcy has been selected as the release date for a collection of essays edited by Diane Coyle with the provocative title: What's the Use of Economics? The timing is impeccable and the question legitimate.

The book brings together some very thoughtful responses by Andrew Haldane, John Kay, Wendy Carlin, Alan Kirman, Andrew Lo, Roger Farmer, and a host of other luminaries (the publishers were kind enough to send me an advance copy). There's enough material there for several posts, but I'd like to start with the contribution by John Kay.

This one, as it happens, has been published before; I discussed Mike Woodford's reaction to it in a previous post. But reading it again I realized that it contains a perspective on market efficiency and price discovery that is concise, penetrating and worthy of some elaboration. Kay doesn't just provide a critique of the efficient markets hypothesis; he sketches out an alternative approach based on the idea of prices as the "product of a clash between competing narratives" that can form the basis of an entire research agenda.

He begins with a question famously posed by the Queen of England during a visit to the London School of Economics: Why had economists failed to predict the financial crisis? Robert Lucas pointed out in response that the inability to predict a financial crisis was in fact a prediction of economic theory. This is as pure a distillation of the efficient markets hypothesis as one is likely to find, and Kay uses it to evaluate the hypothesis itself:
Lucas’s assertion that ‘no one could have predicted it’ contains an important, though partial, insight. There can be no objective basis for a prediction of the kind ‘Lehman Bros will go into liquidation on September 15’, because if there were, people would act on that expectation and, most likely, Lehman would go into liquidation straight away. The economic world, far more than the physical world, is influenced by our beliefs about it. 
Such thinking leads, as Lucas explains, directly to the efficient market hypothesis – available knowledge is already incorporated in the price of securities. And there is a substantial amount of truth in this – the growth prospects of Apple and Google, the problems of Greece and the Eurozone, are all reflected in the prices of shares, bonds and currencies. The efficient market hypothesis is an illuminating idea, but it is not “Reality As It Is In Itself”. Information is reflected in prices, but not necessarily accurately, or completely. There are wide differences in understanding and belief, and different perceptions of a future that can be at best dimly perceived. 
In his Economist response, Lucas acknowledges that ‘exceptions and anomalies’ to the efficient market hypothesis have been discovered, ‘but for the purposes of macroeconomic analyses and forecasts they are too small to matter’. But how could anyone know, in advance not just of this crisis but also of any future crisis, that exceptions and anomalies to the efficient market hypothesis are ‘too small to matter’?
The literature on anomalies is not, in fact, concerned with macroeconomic analyses and forecasts. It is rather narrowly focused on predictability in asset prices and the possibility of constructing portfolios that can consistently beat the market on a risk-adjusted basis. And indeed, such anomalies are often found to be quite trivial, especially when one considers the costs of implementing the implied strategies. The inability of actively managed funds to beat the market on average, after accounting for costs and adjusting for risk, is often cited as providing empirical support for market efficiency. But Kay believes that these findings have not been properly interpreted:
What Lucas means when he asserts that deviations are ‘too small to matter’ is that attempts to construct general models of deviations from the efficient market hypothesis – by specifying mechanical trading rules or by writing equations to identify bubbles in asset prices – have not met with much success. But this is to miss the point: the expert billiard player plays a nearly perfect game, but it is the imperfections of play between experts that determine the result. There is a – trivial – sense in which the deviations from efficient markets are too small to matter – and a more important sense in which these deviations are the principal thing that matters. 
The claim that most profit opportunities in business or in securities markets have been taken is justified.  But it is the search for the profit opportunities that have not been taken that drives business forward, the belief that profit opportunities that have not been arbitraged away still exist that explains why there is so much trade in securities. Far from being ‘too small to matter’, these deviations from efficient market assumptions, not necessarily large, are the dynamic of the capitalist economy. 
Such anomalies are idiosyncratic and cannot, by their very nature, be derived as logical deductions from an axiomatic system. The distinguishing characteristic of Henry Ford or Steve Jobs, Warren Buffett or George Soros, is that their behaviour cannot be predicted from any prespecified model. If the behaviour of these individuals could be predicted in this way, they would not have been either innovative or rich. But the consequences are plainly not ‘too small to matter’. 
The preposterous claim that deviations from market efficiency were not only irrelevant to the recent crisis but could never be relevant is the product of an environment in which deduction has driven out induction and ideology has taken over from observation. The belief that models are not just useful tools but also are capable of yielding comprehensive and universal descriptions of the world has blinded its proponents to realities that have been staring them in the face. That blindness was an element in our present crisis, and conditions our still ineffectual responses. 
Fair enough, but how should one proceed? Kay suggests the adoption of more "eclectic analysis... not just deductive logic but also an understanding of processes of belief formation, anthropology, psychology and organisational behaviour, and meticulous observation of what people, businesses, and governments actually do."

I have no quarrel with this prescription, but I'd also like to make a case for more creative and versatile deductive logic. One of the key modeling hypotheses in the economics of information is the so-called Harsanyi doctrine (or common prior assumption), which stipulates that all differences in beliefs ought to be modeled as if they arise from differences in information. This hypothesis implies that individuals can only disagree if such disagreement is not itself common knowledge: they cannot agree to disagree. It is not hard to see that such a hypothesis could not possibly allow for pure speculation on asset price movements, and hence cannot account for the large volume of trade in financial markets. In fact, it implies that order books in many markets would be empty, since a posted price would only be met by someone with superior information.
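To see the force of the last point, here is a minimal simulation (all numbers invented) of the adverse selection problem facing anyone who posts a price when a counterparty may know more:

```python
import numpy as np

# The asset's value is uniform on [0, 100]; an uninformed trader posts an
# offer to sell at the unconditional mean, and a counterparty who knows the
# realized value buys only when that value exceeds the posted price.
rng = np.random.default_rng(0)
v = rng.uniform(0, 100, size=1_000_000)   # realized asset values
p = 50.0                                   # posted price = unconditional mean

filled = v > p                             # informed counterparty lifts the offer
print("unconditional mean value:", round(v.mean(), 2))                  # about 50
print("mean value when the offer is hit:", round(v[filled].mean(), 2))  # about 75
```

Conditional on the order being filled, the poster parts with an asset worth about 75 for a price of 50. Anticipating this, nobody posts, and the book stays empty.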

The point is that over-reliance on deductive logic is not the only problem as far as financial modeling is concerned; the core assumptions to which deductive logic has been applied are themselves too restrictive. To my mind, the most interesting part of Kay's essay suggests how one might improve on this:
You can learn a great deal about deviations from the efficient market hypothesis, and the role they played in the recent financial crisis, from journalistic descriptions by people like Michael Lewis and Greg Zuckerman, who describe the activities of some individuals who did predict it. The large volume of such material that has appeared suggests many avenues of understanding that might be explored. You could develop models in which some trading agents have incentives aligned with those of the investors who finance them and others do not. You might describe how prices are the product of a clash between competing narratives about the world. You might appreciate the natural human reactions that made it difficult to hold short positions when they returned losses quarter after quarter.
There is definitely ongoing work in economics that explores many of these directions, some of which I have surveyed in previous posts. But the idea of prices as the product of a clash between competing narratives about the world reminded me of a paper by Harrison and Kreps, which was one of the earliest models in finance to shed the common prior assumption.

For anyone interested in developing models of heterogeneous beliefs in which trading occurs naturally over time, the Harrison-Kreps paper is the perfect place to start. They illustrate their model with an example that is easy to follow: a single asset provides title to a stream of dividend payments that may be either high or low, and investors disagree about the likelihood of transitions from high to low states and vice versa. This means that the investors who value the asset most in one state differ from those who value it most in the other. Trading occurs as the asset is transferred between investors in the two belief classes each time the state changes. The authors show that the price in both states is higher than it would be if investors were forced to hold the asset forever: there is a speculative premium that arises from the knowledge that someone else will, in due course and mistakenly in your opinion, value the asset more than you do. The contrast with the efficient markets hypothesis is striking and clear:
The basic tenet of fundamentalism, which goes back at least to J. B. Williams (1938), is that a stock has an intrinsic value related to the dividends it will pay, since a stock is a share in some enterprise and dividends represent the income that the enterprise gains for its owners. In one sense, we think that our analysis is consistent with the fundamentalist spirit, tempered by a subjectivist view of probability. Beginning with the view that stock prices are created by investors, and recognizing that investors may form different opinions even when they have the same substantive information, we contend that there can be no objective intrinsic value for the stock. Instead, we propose that the relevant notion of intrinsic value is obtained through market aggregation of diverse investor assessments. There are fundamentalist overtones in this position, since it is the market aggregation of investor attitudes and beliefs about future dividends with which we start. Under our assumptions, however, the aggregation process eventually yields prices with some curious characteristics. In particular, investors attach a higher value to ownership of the stock than they do to ownership of the dividend stream that it generates, which is not an immediately palatable conclusion from a fundamentalist point of view.
The idea that prices are "obtained through market aggregation of diverse investor assessments" is not too far from Kay's more rhetorically powerful claim that they are "the product of a clash between competing narratives". What Harrison and Kreps do not consider is how diverse investor assessments change over time, since beliefs about transition probabilities are exogenously given in their analysis. But Kay's formulation suggests how progress on this front might be made. Beliefs change as some narratives gather influence relative to others, either through active persuasion (talking one's book for instance) or through differentials in profits accruing to those with different worldviews. While Kay is surely correct that a rich understanding of this process requires more than deductive reasoning, it is also true that deductive reasoning has not yet been pushed to its limits in facilitating our understanding of market dynamics.
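As an illustration of how far even a simple deductive exercise can take us, here is a numerical sketch of the Harrison-Kreps mechanism with illustrative parameters (not necessarily those used in the original paper): a two-state dividend process, two risk-neutral belief classes with different transition matrices, and a common discount factor.

```python
import numpy as np

beta = 0.75                      # common discount factor (illustrative)
d = np.array([0.0, 1.0])         # dividend paid in the low and high state

# Two belief classes with different transition matrices over the dividend
# state (row = current state, column = next state).  Any pair of matrices
# in which the identity of the more optimistic class switches across states
# produces the same qualitative result.
Qa = np.array([[1/2, 1/2],
               [2/3, 1/3]])
Qb = np.array([[2/3, 1/3],
               [1/4, 3/4]])

def buy_and_hold_value(Q):
    # Value of holding the asset forever: V = beta * Q @ (d + V)
    return np.linalg.solve(np.eye(2) - beta * Q, beta * Q @ d)

Va, Vb = buy_and_hold_value(Qa), buy_and_hold_value(Qb)

# With resale, the asset is always held by whichever class values it more,
# so the price solves p = beta * max_i Q_i @ (d + p).  Iterate to the fixed
# point (a contraction, since beta < 1).
p = np.zeros(2)
for _ in range(1000):
    p = beta * np.maximum(Qa @ (d + p), Qb @ (d + p))

print("buy-and-hold values, class a:", np.round(Va, 3))
print("buy-and-hold values, class b:", np.round(Vb, 3))
print("price with resale:           ", np.round(p, 3))
print("speculative premium:         ", np.round(p - np.maximum(Va, Vb), 3))
```

In both states the price with resale exceeds even the more optimistic class's buy-and-hold valuation, which is precisely the Harrison-Kreps result.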

Monday, August 13, 2012

Building a Better Dow

The following post, written jointly with Debraj Ray, is based on our recent note proposing a change in the method for computing the Dow.

---

With a market capitalization approaching $600 billion, Apple is currently the largest publicly traded company in the world. The previous title-holder, Exxon Mobil, now stands far behind at about $400 billion. But Apple is not a component of the Dow Jones Industrial Average. Nor is Google, with a higher valuation than all but a handful of firms in the index. Meanwhile, firms with less than a tenth of Apple's market capitalization, including Alcoa and Hewlett-Packard, continue to be included.

The exclusion of firms like Apple and Google would appear to undermine the stated purpose of the index, which is "to provide a clear, straightforward view of the stock market and, by extension, the U.S. economy." But there are good reasons for such seemingly arbitrary omissions. The Dow is a price-weighted index, and the average price of its thirty components is currently around $58. Both Apple and Google have share prices in excess of $600, and their inclusion would cause day-to-day changes in the index to be driven largely by the behavior of these two securities. For instance, their combined weight in the Dow would be about 43% if they were to replace Alcoa and Travelers, which are the two current components with the lowest valuations. Furthermore, the index would become considerably more volatile even if the included stocks were individually no more volatile than those they replace. As John Prestbo, chairman of the index oversight committee, has observed, such heavy dependence of the index on one or two stocks would "hamper its ability to accurately reflect the broader market."

Indeed, price-weighting is a decidedly odd methodology. IBM has a smaller market capitalization than Microsoft, but a substantially higher share price. Under current conditions, a 1% change in the price of IBM has an effect on the index that is almost seven times as great as a 1% change in the price of Microsoft. In fact, IBM's weight in the index is above 11%, although its valuation is less than 6% of the total among Dow components.
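The asymmetry is easy to see in a back-of-the-envelope calculation. The prices and share counts below are hypothetical round numbers, not actual 2012 quotes:

```python
# Hypothetical figures chosen only to illustrate the mechanics of the two
# weighting schemes; they are not actual 2012 market data.
prices = {"IBM": 200.0, "MSFT": 30.0, "XOM": 88.0}        # dollars per share
shares = {"IBM": 1.1e9, "MSFT": 8.4e9, "XOM": 4.6e9}      # shares outstanding

price_weights = {k: v / sum(prices.values()) for k, v in prices.items()}
caps = {k: prices[k] * shares[k] for k in prices}
value_weights = {k: v / sum(caps.values()) for k, v in caps.items()}

for k in prices:
    print(f"{k}: price weight {price_weights[k]:.1%}, value weight {value_weights[k]:.1%}")

# In a price-weighted index, a 1% move in IBM shifts the index by
# prices['IBM'] / prices['MSFT'] times as much as a 1% move in Microsoft,
# whatever their relative market capitalizations.
```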

This issue does not arise with value-weighted indexes such as the S&P 500. But as Prestbo and others have pointed out, the Dow provides an uninterrupted picture of stock market movements dating back to 1896. An abrupt switch to value weighting would introduce a methodological discontinuity that would "essentially obliterate this history." Attention has therefore been focused on the desirability of a stock split, which would reduce Apple's share price to a level that could be accommodated by the questionable methodology of the Dow.

But an abrupt switch to value weighting and the flawed artifice of a stock split are not the only available options. In a recent paper we propose a modification that largely preserves the historical integrity of the Dow time series, while allowing for the inclusion of securities regardless of their market price. Our modified index also leads, as incumbent stocks are replaced, to a smooth and gradual transition to a fully value-weighted index in the long run.

The proposed index is composed of two subindices, one price-weighted to respect the internal structure of the Dow, and the other value-weighted to apply to new entrants. The index has two parameters, both of which are adjusted whenever a substitution is made. One of these maintains continuity in the value of the index, while the other ensures that the two subindices are weighted in proportion to their respective market capitalizations. Stock splits require a change in parameters (as in the case of the current Dow divisor) but only if the split occurs for a firm in the price-weighted subindex.
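The sketch below is a stylized reconstruction of how such a two-parameter construction could work; the exact formula in the paper may differ in detail, and all of the numbers are hypothetical. Incumbents contribute through a price-weighted sum, entrants through a value-weighted sum, and the two scale parameters are reset at each substitution so that the index level is continuous and the two subindices carry weights proportional to their total market capitalizations.

```python
def index_level(lam, mu, incumbent_price_sum, entrant_cap_sum):
    # Price-weighted subindex for incumbents, value-weighted for entrants.
    return lam * incumbent_price_sum + mu * entrant_cap_sum

def rebalance(old_level, incumbent_price_sum, incumbent_cap_sum, entrant_cap_sum):
    """Reset (lam, mu) after a substitution so that
    (i)  the index level is unchanged at the instant of substitution, and
    (ii) the two subindices are weighted in proportion to their market caps."""
    incumbent_share = incumbent_cap_sum / (incumbent_cap_sum + entrant_cap_sum)
    lam = old_level * incumbent_share / incumbent_price_sum
    mu = old_level * (1.0 - incumbent_share) / entrant_cap_sum
    return lam, mu

# Hypothetical numbers: an index at 13,000 whose incumbents have a price sum
# of $1,600 and combined market cap of $3.6 trillion, admitting one entrant
# with a market cap of $0.6 trillion into the value-weighted subindex.
lam, mu = rebalance(13_000.0, 1_600.0, 3.6e12, 0.6e12)
print(index_level(lam, mu, 1_600.0, 0.6e12))            # 13,000 at the substitution
print(round(0.6e12 * mu / 13_000.0, 3))                  # the entrant's index weight
```

As incumbents drop out, the price-weighted term shrinks and the construction converges to a pure value-weighted index.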

Once all incumbent firms are replaced, the result will be a fully value-weighted index. In practice this could take several decades, as some incumbent firms are likely to remain components far into the future. But firms in the price-weighted component of the index that happen to have weights roughly commensurate with their market capitalization can be transferred with no loss of continuity to the value-weighted component. This procedure, which we call bridging, can accelerate the transition to a value-weighted index with minimal short-term disruption. Currently Coca-Cola and Disney are prime candidates for bridging.

Under our proposed index, Apple would enter with a weight of less than 13% if it were to replace Alcoa. This is scarcely more than the weight currently associated with IBM, a substantially smaller company. Adding Google (in place of HP or Travelers) would further lower the weight of Apple since the total market capitalization of Dow components would rise. This is a relatively modest change that, we believe, would simultaneously serve the desirable goals of methodological continuity and market representativeness.

Friday, July 13, 2012

Market Overreaction: A Case Study

At 7:30pm yesterday the Drudge Report breathlessly broadcast the following:
ROMNEY NARROWS VP CHOICES; CONDI EMERGES AS FRONTRUNNER
Thu Jul 12 2012 19:30:01 ET 
**Exclusive** 
Late Thursday evening, Mitt Romney's presidential campaign launched a new fundraising drive, 'Meet The VP' -- just as Romney himself has narrowed the field of candidates to a handful, sources reveal. 
And a surprise name is now near the top of the list: Former Secretary of State Condoleezza Rice! 
The timing of the announcement is now set for 'coming weeks'.
The reaction on Intrade was immediate. The price of a contract that pays $10 if Rice is selected as Romney's running mate (and nothing otherwise) shot up from about 35 cents to $2, with about 2500 contracts changing hands within twenty minutes of the Drudge announcement. By the sleepy standards of the prediction market this constitutes very heavy volume. Nate Silver responded at 7:49 as follows:
The Condi Rice for VP contract at Intrade possibly the most obvious short since Pets.com
Good advice, as it turned out. By 9:45 pm the price had dropped to 90 cents a contract with about 5000 contracts traded in total since the initial announcement. Here's the price and volume chart:


One of the most interesting aspects of markets such as Intrade is that they offer sets of contracts on an exhaustive list of mutually exclusive events. For instance, the Republican VP Nominee market contains not just a contract for Rice but also contracts for 56 other potential candidates, as well as a residual contract that pays off if none of the named contracts do. The sum of the best bids for all these contracts cannot exceed $10; otherwise someone could sell the entire set of contracts and lock in an arbitrage profit. In practice, no individual is going to take the trouble to spot and exploit such opportunities, but it's a trivial matter to write a computer program that can do so as soon as they arise.
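Something like the following check, run continuously against the order book, is all that is needed (the bid quotes below are hypothetical):

```python
# Best bids (in dollars) for an exhaustive set of mutually exclusive
# contracts, each of which pays $10 if its event occurs.
best_bids = {"Rice": 2.00, "Portman": 3.10, "Pawlenty": 2.30,
             "Ryan": 1.80, "Field (all others)": 1.40}

proceeds = sum(best_bids.values())   # collected now by selling one of each
payout = 10.00                       # owed later on whichever contract wins

if proceeds > payout:
    print(f"sell the full set: riskless profit of ${proceeds - payout:.2f} per set")
else:
    print("no arbitrage at current bids")
```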

In fact, such algorithms are in widespread use on Intrade, and easy to spot. The sharp rise in the Rice contract caused the arbitrage condition to be momentarily violated and simultaneous sales of the entire set of contracts began to occur. While the price of one contract rose, the prices of the others (Portman, Pawlenty, and Ryan especially) were knocked back as existing bids started to be filled by algorithmic instruction. But as new bidders appeared for these other contracts the Rice contract itself was pushed back in price, resulting in the reversal seen in the above chart. All this in a matter of two or three hours.

Does any of this have relevance for the far more economically significant markets for equity and debt? There's a fair amount of direct evidence that these markets are also characterized by overreaction to news, and such overreaction is consistent with the excess volatility of stock prices relative to dividend flows. But overreactions in stock and bond markets can take months or years to reverse.  Benjamin Graham famously claimed that "the interval required for a substantial undervaluation to correct itself averages approximately 1½ to 2½ years," and DeBondt and Thaler found that "loser" portfolios (composed of stocks that had previously experienced sharp capital losses) continued to outperform "winner" portfolios (composed of those with significant prior capital gains) for up to five years after construction.

One reason why overreaction to news in stock markets takes so long to correct is that there is no arbitrage constraint that forces a decline in other assets when one asset rises sharply in price. In prediction markets, such constraints cause immediate reactions in related contracts as soon as one contract makes a major move. Similar effects arise in derivatives markets more generally: options prices respond instantly to changes in the price of the underlying, futures prices move in lockstep with spot prices, and exchange-traded funds trade at prices that closely track those of their component securities. Most of this activity is generated by algorithms designed to sniff out and snap up opportunities for riskless profit. But the primitive assets in our economy, stocks and bonds, are constrained only by beliefs about their future values, and can therefore wander far and wide for long periods before being dragged back by their cash flow anchors.

---

Update (7/13). Mark Thoma and Yves Smith have both reposted this, with interesting preludes. Here's Yves:
I’d like to quibble with the notion that there is such a thing as a correct price for as vague a promise as a stock (by contrast, for derivatives, it is possible to determine a theoretical price in relationship to an actively traded underlying instrument, so even though the underlying may be misvalued, the derivative’s proper value given the current price and other parameters can be ascertained).  
Sethi suggests that stocks have “cash flow anchors”. I have trouble with that notion. A bond is a very specific obligation: to pay interest in specified amounts on specified dates, and to repay principal as of a date certain... By contrast, a stock is a very unsuitable instrument to be traded on an arm’s length, anonymous basis. A stock is a promise to pay dividends if the company makes enough money and the board is in the mood to do so. Yes, you have a vote, but your vote can be diluted at any time. There aren’t firm expectations of future cash flows; it’s all guess work and heuristics.
I chose the term "anchor" with some care, because the rode of an anchor is not always taut. I didn't mean to suggest that there is a single proper value for a stock that can be unambiguously deduced from the available information; heterogeneity in the interpretation of information alone is enough to generate a broad range of valuations. This can allow for drift in various directions as long as the price doesn't become too far detached from earnings projections.

Mark argues that the leak to Drudge was an attempt at distraction:
Rajiv Sethi looks at the reaction to the Romney campaign's attempt to change the subject from Romney's role at Bain to potential picks for vice-president (as far as I can tell, Rice has no chance -- she's "mildly pro-choice" for one -- so this was nothing more than an attempt to divert attention from Bain, an attempt one that seems to have worked, at least to some extent).
This view, which seems to be held on both the left and the right, was brilliantly summed up by Nate Silver as follows:
drudge (v.): To leak news to displace an unfavorable headline; to muddy up the news cycle.
I was tempted to reply to Nate's tweet with:
twartist (n.): One who is able by virtue of imagination and skill to create written works of aesthetic value in 140 characters or less.
But it seems that the term is already in use.

Saturday, June 30, 2012

Fighting over Claims

This brief segment from a recent speech by Joe Stiglitz sums up very neatly the nature of our current economic predicament (emphasis added):
We should realize that the resources in our economy... today are the same as they were five years ago. We have the same human capital, the same physical capital, the same natural capital, the same knowledge... the same creativity... we have all these strengths, they haven't disappeared. What has happened is, we're having a fight over claims, claims to resources. We've created more liabilities... but these are just paper. Liabilities are claims on these resources. But the resources are there. And the fight over the claims is interfering with our use of the resources.
I think this is a very useful way to think about the potential effectiveness under current conditions of various policy proposals, including conventional fiscal and monetary stabilization policies.

Part of the reason for our anemic and fitful recovery is that contested claims, especially in the housing market, continue to be settled in a chaotic and extremely wasteful manner. Recovery from subprime foreclosures is typically a small fraction of outstanding principal, and properly calibrated principal write-downs can often benefit both borrowers and lenders. Modifications that would occur routinely under the traditional bilateral model of lending are much harder to implement when lenders are holders of complex structured claims on the revenues generated by mortgage payments. Direct contact between lenders and borrowers is neither legal nor practicable in this case, and the power to make modifications lies instead with servicers. But servicer incentives are not properly aligned with those of the lenders on whose behalf they collect and process payments. The result is foreclosure even when modification would be much less destructive of resources.
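A stylized example (all figures hypothetical) makes the point:

```python
principal = 300_000          # outstanding balance on the loan
home_value = 240_000         # current market value of the house
recovery_rate = 0.25         # lender's expected recovery in foreclosure

foreclosure_proceeds = recovery_rate * principal       # 75,000
written_down_balance = 0.90 * home_value               # 216,000

print("expected foreclosure proceeds:   ", foreclosure_proceeds)
print("balance after principal write-down:", written_down_balance)
print("borrower equity after write-down:  ", home_value - written_down_balance)
```

If the borrower can service the reduced balance, the lender recovers far more than the expected foreclosure proceeds, and the borrower once again has an equity stake worth protecting.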

Despite some indications that home values are starting to rise again, the steady flow of defaults and foreclosures shows no sign of abating. Any policy that stands a chance of getting us back to pre-recession levels of resource utilization has to result in the quick and orderly settlement of these claims, with or without modification of the original contractual terms. And it's not clear to me that the blunt instruments of conventional stabilization policy can accomplish this.

Consider monetary policy for instance. The clamor for more aggressive action by the Fed has recently become deafening, with a long and distinguished line of advocates (see, for instance, recent posts by Miles Kimball, Joseph Gagnon, Ryan Avent, Scott Sumner, Paul Krugman, and Tim Duy). While the various proposals differ with respect to details, the idea seems to be the following: (i) the Fed has the capacity to increase inflation and nominal GDP should it choose to do so, (ii) this can be accomplished by asset purchases on a large enough scale, and (iii) doing this would increase not only inflation and nominal GDP but also output and employment.

It's the third part of this argument with which I have some difficulty, because I don't see how it would help resolve the fight over claims that is crippling our recovery. Higher inflation can certainly reduce the real value of outstanding debt in an accounting sense, but this doesn't mean that distressed borrowers will be able to meet their obligations at the originally contracted terms. In order for them to do so, it is necessary that their nominal income rises, not just nominal income in the aggregate. And monetary policy via asset purchases would seem to put money disproportionately in the pockets of existing asset holders, who are more likely to be creditors than debtors. Put differently, while the Fed has the capacity to raise nominal income, it does not have much control over the manner in which this increment is distributed across the population. And the distribution matters.

Similar issues arise with inflation. Inflation is just the growth rate of an index number, a weighted average of prices for a broad range of goods and services. The Fed can certainly raise the growth rate of this average, but has virtually no control over its individual components. That is, it cannot increase the inflation rate without simultaneously affecting relative prices. For instance, purchases of assets that drive down long term interest rates will lead to portfolio shifts and an increase in the price of commodities, which are now an actively traded asset class. This in turn will raise input costs for some firms more than others, and these cost increases will affect wages and prices to varying degrees depending on competitive conditions. As Dan Alpert has argued, expansionary monetary policy under these conditions could even "collapse economic activity, as limited per capita wages are shunted to oil and food, rather than to more expansionary forms of consumption."

I don't mean to suggest that more aggressive action by the Fed is unwarranted or would necessarily be counterproductive, just that it needs to be supplemented by policies designed to secure the rapid and efficient settlement of conflicting claims.

One of the most interesting proposals of this kind was floated back in October 2008 by John Geanakoplos and Susan Koniak, and a second article a few months later expanded on the original. It's worth examining the idea in detail. First, deadweight losses arising from foreclosure are substantial:
For subprime and other non-prime loans, which account for more than half of all foreclosures, the best thing to do for the homeowners and for the bondholders is to write down principal far enough so that each homeowner will have equity in his house and thus an incentive to pay and not default again down the line... there is room to make generous principal reductions, without hurting bondholders and without spending a dime of taxpayer money, because the bond markets expect so little out of foreclosures. Typically, a homeowner fights off eviction for 18 months, making no mortgage or tax payments and no repairs. Abandoned homes are often stripped and vandalized. Foreclosure and reselling expenses are so high the subprime bond market trades now as if it expects only 25 percent back on a loan when there is a foreclosure.
Second, securitization precludes direct contact between borrowers and lenders:
In the old days, a mortgage loan involved only two parties, a borrower and a bank. If the borrower ran into difficulty, it was in the bank’s interest to ease the homeowner’s burden and adjust the terms of the loan. When housing prices fell drastically, bankers renegotiated, helping to stabilize the market. 
The world of securitization changed that, especially for subprime mortgages. There is no longer any equivalent of “the bank” that has an incentive to rework failing loans. The loans are pooled together, and the pooled mortgage payments are divided up among many securities according to complicated rules. A party called a “master servicer” manages the pools of loans. The security holders are effectively the lenders, but legally they are prohibited from contacting the homeowners.
Third, the incentives of servicers are not aligned with those of lenders:
Why are the master servicers not doing what an old-fashioned banker would do? Because a servicer has very different incentives. Most anything a master servicer does to rework a loan will create big winners but also some big losers among the security holders to whom the servicer holds equal duties... By allowing foreclosures to proceed without much intervention, they avoid potentially huge lawsuits by injured security holders. 
On top of the legal risks, reworking loans can be costly for master servicers. They need to document what new monthly payment a homeowner can afford and assess fluctuating property values to determine whether foreclosing would yield more or less than reworking. It’s costly just to track down the distressed homeowners, who are understandably inclined to ignore calls from master servicers that they sense may be all too eager to foreclose.
And finally, the proposed solution:
To solve this problem, we propose legislation that moves the reworking function from the paralyzed master servicers and transfers it to community-based, government-appointed trustees. These trustees would be given no information about which securities are derived from which mortgages, or how those securities would be affected by the reworking and foreclosure decisions they make. 
Instead of worrying about which securities might be harmed, the blind trustees would consider, loan by loan, whether a reworking would bring in more money than a foreclosure... The trustees would be hired from the ranks of community bankers, and thus have the expertise the judiciary lacks...  
Our plan does not require that the loans be reassembled from the securities in which they are now divided, nor does it require the buying up of any loans or securities. It does require the transfer of the servicers’ duty to rework loans to government trustees. It requires that restrictions in some servicing contracts, like those on how many loans can be reworked in each pool, be eliminated when the duty to rework is transferred to the trustees... Once the trustees have examined the loans — leaving some unchanged, reworking others and recommending foreclosure on the rest — they would pass those decisions to the government clearing house for transmittal back to the appropriate servicers... 
Our plan would keep many more Americans in their homes, and put government money into local communities where it would make a difference. By clarifying the true value of each loan, it would also help clarify the value of securities associated with those mortgages, enabling investors to trade them again. Most important, our plan would help stabilize housing prices.
As with any proposal dealing with a problem of such magnitude and complexity, there are downsides to this. Anticipation of modification could induce borrowers who are underwater but current with their payments to default strategically in order to secure reductions in principal. Such policy-induced default could be mitigated by ensuring that only truly distressed households qualify. But since current financial distress is in part a reflection of past decisions regarding consumption and saving, some are sure to find the distributional effects of the policy galling. Nevertheless, it seems that something along these lines needs to be attempted if we are to get back to pre-recession levels of resource utilization anytime soon. And the urgency of action does seem to be getting renewed attention.

The bottom line, I think, is this: too much faith in the traditional tools of macroeconomic stabilization under current conditions is misplaced. One can conceive of dramatically different approaches to monetary policy, such as direct transfers to households, but these would surely face insurmountable legal and political obstacles. It is essential, therefore, that macroeconomic stabilization be supplemented by policies that are microeconomically detailed, fine grained, and directly confront the problem of balance sheet repair. Otherwise this enormously costly fight over claims will continue to impede the use of our resources for many more years to come.

Sunday, June 24, 2012

Reciprocal Fear and the Castle Doctrine Laws

In his timeless classic The Strategy of Conflict, Thomas Schelling began a chapter on the "reciprocal fear of surprise attack" as follows:
If I go downstairs to investigate a noise at night, with a gun in my hand, and find myself face to face with a burglar who has a gun in his hand, there is a danger of an outcome that neither of us desires. Even if he prefers to just leave quietly, and I wish him to, there is danger that he may think I want to shoot, and shoot first. Worse, there is danger that he may think that I think he wants to shoot. Or he may think that I think he thinks I want to shoot. And so on. "Self-Defense" is ambiguous, when one is only trying to preclude being shot in self-defense.
This effect is empirically important, and is part of the reason why homicide rates vary so greatly across otherwise similar locations, and can change so sharply over time at a given location. In our attempt to understand why the Newark homicide rate doubled in just six years, from 2000 to 2006, while the national rate remained essentially constant, Dan O'Flaherty and I found a substantial number of homicides to be the outcome of escalating disputes between strangers or acquaintances, often over seemingly trivial matters. High rates of homicide make for a tense and fearful environment within which the preemptive motive for killing starts to loom large, and this itself reinforces the cycle of tension, fear, and continued killing. Incremental reductions in homicide under such circumstances are unlikely to be feasible, but sudden large scale reductions that transform the environment and break the cycle can sometimes be attained. Similar effects arise with international arms races.

In the jargon of economics, homicide is characterized by strategic complementarity: any increase in the willingness of one set of individuals to kill will be amplified by increases in the willingness of others to kill preemptively, and so on, in an expectations driven cascade. Any change in fundamentals can set this process off, such as a breakdown in law enforcement, easier availability of firearms, or increases in the value of a contested resource.
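The logic can be illustrated with a toy best-response model (all parameters invented): each person's willingness to use lethal force preemptively rises with the fraction of others expected to do so, and the steady state of this feedback depends on where the process starts.

```python
import numpy as np

def best_response(x, fundamentals):
    # Willingness to kill preemptively rises with the prevailing level x and
    # with "fundamentals" (weak enforcement, gun availability, contested turf).
    return 1.0 / (1.0 + np.exp(-(fundamentals + 8.0 * (x - 0.5))))

def steady_state(x0, fundamentals, iters=500):
    x = x0
    for _ in range(iters):
        x = best_response(x, fundamentals)
    return x

for fundamentals in (-1.0, 0.5, 1.5):
    calm = steady_state(0.05, fundamentals)
    fearful = steady_state(0.95, fundamentals)
    print(f"fundamentals {fundamentals:+.1f}: from a calm start {calm:.2f}, "
          f"from a fearful start {fearful:.2f}")
```

For the first two parameter values, low- and high-violence steady states coexist and expectations select between them; once fundamentals deteriorate enough, the low-violence equilibrium disappears entirely.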

The logic of strategic complementarity implies that a broadening of the notion of justifiable homicide, in an attempt to benefit potential victims of crime, can have tragic and entirely counterproductive effects. Florida's 2005 stand-your-ground law is an example of this, and more than twenty other states have adopted similar legislation in its wake. These are sometimes called castle doctrine laws, since they extend to other locations the principle that one does not have a duty to retreat in one's own home (or "castle").

Enough time has elapsed since the passage of these laws for an empirical analysis of their effects to be conducted, and a recent paper by Cheng and Hoekstra does exactly this. Determining the causal effects of any change in the legal environment is always a tricky business. The authors tackle the problem by grouping states into those that adopted such laws and those that did not, and comparing within-state changes in outcomes across the two groups of states (the so-called difference-in-differences identification strategy). Their findings are striking:
Results indicate that the prospect of facing additional self-defense does not deter crime. Specifically, we find no evidence of deterrence effects on burglary, robbery, or aggravated assault. Moreover, our estimates are sufficiently precise as to rule out meaningful deterrence effects. 
In contrast, we find significant evidence that the laws increase homicides... the laws increase murder and manslaughter by a statistically significant 7 to 9 percent, which translates into an additional 500 to 700 homicides per year nationally across the states that adopted castle doctrine. Thus, by lowering the expected costs associated with using lethal force, castle doctrine laws induce more of it... murder alone is increased by a statistically significant 6 to 11 percent. This is important because murder excludes non-negligent manslaughter classifications that one might think are used more frequently in self-defense cases. But regardless of how one interprets increases from various classifications, it is clear that the primary effect of strengthening self-defense law is to increase homicide.
These are statistical findings and refer to aggregate effects; no individual homicide can be attributed with certainty to a change in the legal environment, not even the one killing that has brought castle doctrine laws into national focus. Nevertheless, we now have compelling evidence that the adoption of such laws has led directly to several hundred deaths annually nationwide, with negligible deterrence effects on other crimes. While the latter finding may be surprising, the former should have been entirely predictable.
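For readers unfamiliar with the difference-in-differences design mentioned above, the toy regression below shows the basic idea on synthetic data with an effect of 8 percent planted by construction; the authors' actual specifications are considerably richer.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic state-year panel: half the states adopt a law in 2006.
rng = np.random.default_rng(1)
states = [f"s{i}" for i in range(40)]
adopters = set(states[:20])

rows = []
for s in states:
    state_effect = rng.normal(0, 0.3)
    for t in range(2000, 2011):
        treated_post = int(s in adopters and t >= 2006)
        # planted effect: an 8 percent increase in homicide after adoption
        log_homicide = (1.0 + state_effect + 0.01 * (t - 2000)
                        + 0.08 * treated_post + rng.normal(0, 0.05))
        rows.append(dict(state=s, year=t, treated_post=treated_post,
                         log_homicide=log_homicide))
df = pd.DataFrame(rows)

# Two-way fixed effects: state dummies absorb level differences across
# states, year dummies absorb common shocks over time.
fit = smf.ols("log_homicide ~ treated_post + C(state) + C(year)", data=df).fit()
print(fit.params["treated_post"])   # recovers roughly 0.08
```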

Tuesday, June 12, 2012

Elinor Ostrom, 1933-2012

The political scientist Elinor Ostrom, co-recipient of the 2009 Nobel Prize in Economics, died this morning at the age of 78. I met her just once, after a talk she gave at Columbia sometime in the 1990s. It was in a very interesting seminar series organized by Dick Nelson if I recall correctly.

I was a great admirer of Ostrom's research on common pool resources, and tried to interpret some of her insights from an evolutionary perspective in joint work with E. Somanathan a while ago. I've written about her here on a couple of occasions, and once reviewed a book that was largely a celebration of her vision (she had a hand in no less than six chapters).

Here are some extracts from a post written soon after the Nobel announcement:
Ostrom’s extensive research on local governance has shattered the myth of inevitability surrounding the “tragedy of the commons” and curtailed the uncritical application of the free-rider hypothesis to collective action problems. Prior to her work it was widely believed that scarce natural resources such as forests and fisheries would be wastefully used and degraded or exhausted under common ownership, and therefore had to be either state owned or held as private property in order to be efficiently managed. Ostrom demonstrated that self-governance was possible when a group of users had collective rights to the resource, including the right to exclude outsiders, and the capacity to enforce rules and norms through a system of decentralized monitoring and sanctions. This is clearly a finding of considerable practical significance. 
As importantly, the award recognized an approach to research that is practically extinct in contemporary economics. Ostrom developed her ideas by reading and generalizing from a vast number of case studies of forests, fisheries, groundwater basins, irrigation systems, and pastures. Her work is rich in institutional detail and interdisciplinary to the core. She used game theoretic models and laboratory experiments to refine her ideas, but historical and institutional analysis was central to this effort. She deviated from standard economic assumptions about rationality and self-interest when she felt that such assumptions were at variance with observed behavior, and did so long before behavioral economics was in fashion... 
There is no doubt that her research has dramatically transformed our thinking about the feasibility and efficiency of common property regimes. In addition, it serves as a reminder that her eclectic and interdisciplinary approach to social science can be enormously fruitful. In making this selection at this time, it is conceivable that the Nobel Committee is sending a message that methodological pluralism is something our discipline would do well to restore, preserve and foster.
And from the book review:
Although several distinguished scholars have been affiliated with the workshop over the years, Ostrom remains its leading light and creative force. It is fitting, therefore, that the book concludes with her 1988 Presidential Address to the American Political Science Association. In this chapter, she identifies serious shortcomings in prevailing theories of collective action. Approaches based on the hypothesis of unbounded rationality and material self-interest often predict a “tragedy of the commons” and prescribe either privatization of common property or its appropriation by the state. Policies based on such theories, in her view, “have been subject to major failure and have exacerbated the very problems they were intended to ameliorate”. What is required, instead, is an approach to collective action that places reciprocity, reputation and trust at its core. Any such theory must take into account our evolved capacity to learn norms of reciprocity, and must incorporate a theory of boundedly rational and moral behavior. It is only in such terms that the effects of communication on behavior can be understood. Communication is effective in fostering cooperation, in Ostrom’s view, because it allows subjects to build trust, form group identities, reinforce reciprocity norms, and establish mutual commitment. The daunting task of building rigorous models of economic and political choice in which reciprocity and trust play a meaningful role is only just beginning... 
The key conclusions drawn by the contributors are nuanced and carefully qualified, but certain policy implications do emerge from the analysis. The most important of these is that local communities can often find autonomous and effective solutions to collective-action problems when markets and states fail to do so. Such institutions of self-governance are fragile: large-scale interventions, even when well-intentioned, can disrupt and damage local governance structures, often resulting in unanticipated welfare losses. When a history of successful community resource management is in evidence, significant interventions should be made with caution. Once destroyed, evolved institutions are every bit as difficult to reconstruct as natural ecosystems, and a strong case can be made for conserving those that achieve acceptable levels of efficiency and equity. By ignoring the possibility of self-governance, one puts too much faith in the benevolence of a national government that is too large for local problems and too small for global ones. Moreover, as Ostrom points out in the concluding chapter, by teaching successive generations that the solution to collective-action problems lie either in the market or in the state, “we may be creating the very conditions that undermine our democratic way of life”. The stakes could not be higher.
Earlier tributes to Ostrom from Vernon Smith and Paul Romer are well worth revisiting.

Friday, April 27, 2012

On Equilibrium, Disequilibrium, and Rational Expectations

There's been some animated discussion recently on equilibrium analysis in economics, starting with a provocative post by Noah Smith, vigorous responses by Roger Farmer and JW Mason, and some very lively comment threads (see especially the smart and accurate points made by Keshav on the latter posts). This is a topic of particular interest to me, and the debate gives me a welcome opportunity to resume blogging after an unusually lengthy pause.

As Farmer's post makes clear, equilibrium in an intertemporal model requires not only that individuals make plans that are optimal conditional on their beliefs about the future, but also that these plans are mutually consistent. The subjective probability distributions on the basis of which individuals make decisions are presumed to coincide with the objective distribution to which these decisions collectively give rise. This assumption is somewhat obscured by the representative agent construct, which gives macroeconomics the appearance of a decision-theoretic exercise. But the assumption is there nonetheless, hidden in plain sight as it were. Large scale asset revaluations and financial crises, from this perspective, arise only in response to exogenous shocks and not because many individuals come to realize that they have made plans that cannot possibly all be implemented.

Farmer points out, quite correctly, that rational expectations models with multiple equilibrium paths are capable of explaining a much broader range of phenomena than those possessed of a unique equilibrium. His own work demonstrates the truth of this claim: he has managed to develop models of crisis and depression without deviating from the methodology of rational expectations. The equilibrium approach, used flexibly with allowances for indeterminacy of equilibrium paths, is more versatile than many critics imagine.

Nevertheless, there are many routine economic transactions that cannot be reconciled with the hypothesis that individual plans are mutually consistent. For instance, it is commonly argued that hedging by one party usually requires speculation by another, since mutually offsetting exposures are rare. But speculation by one party does not require hedging by another, and an enormous amount of trading activity in markets for currencies, commodities, stock options and credit derivatives involves speculation by both parties to each contract. The same applies on a smaller scale to positions taken in prediction markets such as Intrade. In such transactions, both parties are trading based on a price view, and these views are mutually inconsistent by definition. If one party is buying low planning to sell high, their counterparty is doing just the opposite. At most one of the parties can have subjective beliefs that are consistent with the objective probability distribution to which their actions (combined with the actions of others) give rise.

If it were not for fundamental belief heterogeneity of this kind, there could be no speculation. This is a consequence of Aumann's agreement theorem, which states that while individuals with different information can disagree, they cannot agree to disagree as long as their beliefs are derived from a common prior. That is, they cannot persist in disagreeing if their posterior beliefs are themselves common knowledge. The intuition for this is quite straightforward: your willingness to trade with me at current prices reveals that you have different information, which should cause me to revise my beliefs and alter my price view, and should cause you to do the same. Once our mutual willingness to transact becomes common knowledge, we should both shrink from the transaction if our beliefs are derived from a common prior.

Hence accounting for speculation requires that one depart, at a minimum, from the common prior assumption. But allowing for heterogeneous priors immediately implies mutual inconsistency of individual plans, and there can be no identification of subjective with objective probability distributions.

The development of models that allow for departures from equilibrium expectations is now an active area of research. A conference at Columbia last year (with Farmer in attendance) was devoted entirely to this issue, and Mike Woodford's reply to John Kay on the INET blog is quite explicit about the need for movement in this direction:
The macroeconomics of the future... will have to go beyond conventional late-twentieth-century methodology... by making the formation and revision of expectations an object of analysis in its own right, rather than treating this as something that should already be uniquely determined once the other elements of an economic model (specifications of preferences, technology, market structure, and government policies) have been settled.
There is a growing literature on heterogeneous priors that I think could serve as a starting point for the development of such an alternative. However, it is not enough to simply allow for belief heterogeneity; one must also confront the question of how the distribution of (mutually inconsistent) beliefs changes over time. To a first approximation, I would argue that the belief distribution evolves based on differential profitability: successful beliefs proliferate, regardless of whether those holding them were broadly correct or just extremely fortunate. This has to be combined with the possibility that some individuals will invest considerable time and effort and bear significant risk to profit from large mismatches between the existing belief distribution and the objective distribution to which it gives rise. Such contrarian actions may be spectacular successes or miserable failures, but must be accounted for in any theory of expectations that is rich enough to be worthy of the name.
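As a first pass at what such dynamics might look like, here is a deliberately crude sketch (all parameters invented): wealth shares of different belief types are updated in proportion to the realized profitability of the positions those beliefs induce, so a type can gain ground through luck as easily as through accuracy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three belief types, each with its own view of the expected return on a
# risky asset; more optimistic types take larger positions.
subjective_mu = np.array([0.02, 0.06, 0.10])
exposure = subjective_mu / subjective_mu.max()
shares = np.full(3, 1.0 / 3.0)               # initial wealth shares

true_mu, sigma = 0.04, 0.15                  # objective return process
for t in range(200):
    realized = rng.normal(true_mu, sigma)
    growth = np.maximum(1.0 + exposure * realized, 1e-6)  # period payoff factor
    shares *= growth
    shares /= shares.sum()                   # replicator-style update of shares

print("terminal wealth shares by belief type:", np.round(shares, 3))
```

In runs where returns happen to be good, the most aggressive (and objectively most mistaken) type ends up dominating; in bad runs it is wiped out. A serious model would add the contrarian behavior described above, but even this sketch captures the sense in which the belief distribution is shaped by profitability rather than accuracy.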

 --- 

Some of the issues discussed here are explored at greater length in an essay on market ecology that I presented at a symposium in honor of Duncan Foley last week. Duncan was among the first to see that the rational expectations hypothesis implicitly entailed the assumption of complete futures markets, and would therefore be difficult to "reconcile with the recurring phenomena of financial crisis and asset revaluation that play so large a role in actual capitalist economic life."