Thursday, December 02, 2010

Global Health and Wealth over Two Centuries

Here, in four minutes, is a story of remarkable divergence followed by rapid convergence in health and wealth across nations over the past two centuries (h/t David Kurtz).


Where the entire world was clustered in 1810, only sub-Saharan Africa remains. But even here there are profound stirrings of change.

I suspect that someday soon animations such as this will replace the soporific tables and charts that now appear as motivating evidence in economic papers.

---

Update (12/6). Pinkovskiy and Sala-i-Martin argue that over the past decade and a half, the nations of sub-Saharan Africa have experienced a dramatic and broad-based decline in poverty and inequality (h/t Mark Thoma):
African poverty reduction has been extremely general. Poverty fell for both landlocked and coastal countries, for mineral-rich and mineral-poor countries, for countries with favourable and unfavourable agriculture, for countries with different colonisers, and for countries with varying degrees of exposure to the African slave trade. The benefits of growth were so widely distributed that African inequality actually fell substantially...

It has often been suggested that geography and history matter significantly for the ability of Third World, and especially African, countries to grow and reduce poverty... Since these factors are permanent (and cannot be changed with good policy), they imply that some parts of Africa may be at a persistent growth disadvantage relative to others.

Yet... the African poverty decline has taken place ubiquitously, in countries that were slighted as well as in those that were favoured by geography and history. For every breakdown... the poverty rates for countries on either side of the breakdown tend to converge, with the disadvantaged countries reducing poverty significantly to catch up to the advantaged ones. Neither geographical nor historical disadvantages seem to be insurmountable obstacles to poverty reduction... even the most blighted parts of the poorest continent can set themselves firmly on the trend of limiting and even eradicating poverty within the space of a decade.
This is consistent with recent observations by Shanta Devarajan, Ngozi Okonjo-Iweala, and even the much-maligned Gordon Brown.

I have argued in a couple of earlier posts that sub-Saharan Africa may have entered what might be called a zone of uncertainty in which optimistic growth expectations can become self-fulfilling:
History can matter for long periods of time (for instance in occupational inheritance or the patrilineal descent of surnames) and then cease to constrain our choices in any significant way. Once-reliable correlations can break down suddenly and completely; history is full of such twists and turns. As far as African prosperity is concerned, I believe that a discontinuity of this kind is inevitable, if not imminent.

Friday, November 19, 2010

Foley, Sidrauski, and the Microfoundations Project

In a previous post I mentioned an autobiographical essay by Duncan Foley in which he describes in vivid detail his attempts to "alter and generalize competitive equilibrium microeconomic theory" so as to make its predictions more consonant with macroeconomic reality.  Much of this work was done in collaboration with Miguel Sidrauski while the two were members of the MIT faculty some forty years ago. Both men were troubled by the "classical scientific dilemma" facing economics at the time: the discipline had "two theories, the microeconomic general equilibrium theory, and the macroeconomic Keynesian theory, each of which seemed to have considerable explanatory power in its own domain, but which were incompatible." This led them to embark on a "search for a synthesis" that would bridge the gap.

This is how Duncan describes the basic theoretical problem they faced, the strategies they adopted in trying to solve it, the importance of the distinction between stock and flow equilibrium, and the desirability of a theory that allows for intertemporal plans to be mutually inconsistent in the aggregate (links added):
My intellectual preoccupation at M.I.T. was what has come to be called the "microeconomic foundations of macroeconomics." The general equilibrium theory forged by Walras and elaborated by Wald (1951), McKenzie (1959), and Arrow and Debreu (1954) can be used, with the assumption that markets exist for all commodities at all future moments and in all contingencies, to represent macroeconomic reality by simple aggregation. The resulting picture of macroeconomic reality, however, has several disturbing features. For one thing, competitive general equilibrium is efficient, so that it is incompatible with the unemployment of any resources productive enough to pay their costs of utilization. This is difficult to reconcile with the common observation of widely fluctuating rates of unemployment of labor and of capacity utilization of plant and equipment. General equilibrium theory reduces economic production and exchange to the pursuit of directly consumable goods and services, and as a result has no real role for money... The general equilibrium theory can accommodate fluctuations in output and consumption, but only as responses to external shocks to resource availability, technology or tastes. It is difficult to reconcile these relatively slowly moving factors with the large business-cycle fluctuations characteristic of developed capitalist economies. In assuming the clearing of markets for all contingencies in all periods, general equilibrium theory assures the consistency... of individual consumption, investment, and production plans, which is difficult to reconcile with the recurring phenomena of financial crisis and asset revaluation that play so large a role in actual capitalist economic life...

Keynes' theory, on the other hand, offers a systematic way around these problems. Keynes views money as central to the actual operation of developed capitalist economies, precisely because markets for all periods and contingencies do not exist to reconcile differences in agents' opinions about the future. Because agents cannot sell all their prospects on contingent claims markets, they are liquidity constrained. In a liquidity constrained economy there is no guarantee that all factor markets will clear without unemployed labor or unutilized productive capacity. Market prices are inevitably established in part by speculation on an uncertain future. As a result the economy is vulnerable to endogenous fluctuations as the result of herd psychology and self-fulfilling prophecy. From this point of view it is not hard to see why business cycle fluctuations are a characteristic of a productively and financially developed capitalist economy, nor why the potential for financial crisis is inherent in decentralized market allocation of investment...

But there are many loose ends in Keynes' argument. In presenting the equilibrium of short-term expectations that determines the level of output, income and employment in the short period, for example, Keynes argues that entrepreneurs hire labor and buy raw materials to undertake production because they form an expectation as to the volume of sales they will achieve when the production process runs its course... But Keynes offers no systematic alternative account of how entrepreneurs form a view of their prospects on the market to take the place of the assumption of perfect competition and market clearing. This turns out, in detail, to be a very difficult problem to solve.

Given the supply of nominal money, a fall in prices appears to be a possible endogenous source of increased liquidity. Keynes argues that the money price level is largely determined by the money wage level, but offers no systematic explanation of the dynamics governing the movements of money wages.

Though money is the fulcrum on which his theory turns, Keynes does not actually set out a theory of the economic origin or determinants of money. As a result it is difficult to relate the fluctuations in macroeconomic variables such as the velocity of money to the underlying process of the circulation of commodities.

On point after point Keynes' plausible macroeconomic concepts raise unanswered questions about the microeconomic behavior that might support them.

Thus economics in the late 1960s suffered from a classical scientific dilemma in that it had two theories, the microeconomic general equilibrium theory, and the macroeconomic Keynesian theory, each of which seemed to have considerable explanatory power in its own domain, but which were incompatible. The search for a synthesis which would bridge this gap seemed to me to be a good problem to work on. From the beginning the goal of my work in this area was to alter and generalize competitive equilibrium microeconomic theory so as to deduce Keynesian macroeconomic behavior from it.

In the succeeding years I approached this project from two angles. One was to fiddle with general equilibrium theory in the hope of introducing money into it in a convincing and unified way. The other was to rewrite as much as possible of Keynesian macroeconomics in a form compatible with competitive general equilibrium.
This latter project came to fruition first as a close collaboration with Miguel Sidrauski, and resulted in a book, Monetary and Fiscal Policy in a Growing Economy (Foley and Sidrauski, 1971)... Our joint work... sought to develop a canonical model with which it would be possible to analyze the classical problems of the impact of government policy on the path of output of an economy... Following my notion that the price of capital goods is determined in asset markets, and the flow of new investment adjusts to make the marginal cost of investment equal to that price, we assumed a two-sector production system, so that there would be a rising marginal cost of investment. The asset equilibrium of the model is a generalization of Sidrauski's (and Tobin's) portfolio demand theory, which in turn is a generalization of Keynes' theory of liquidity preference. One of my chief goals was to sort out rigorously and explicitly the relation between stock and flow variables, so that we analyzed the model as a system of differential equations in continuous time, a setting in which the difference between stock and flow concepts is highlighted. At each instant asset market clearing of money, bonds, and capital markets in stocks together with labor and consumption good flow market clearing determine the price of capital, the interest rate, the price level, income, consumption and investment. Government policies determining the evolution of supplies of money and bonds together with the addition of investment flows to the capital stock move the model through time in a transparent trajectory. The book considers the comparative statics and dynamics of this model in detail...

Monetary and Fiscal Policy in a Growing Economy had a mixed reception... The fact that we did not derive the asset and consumption demands of households from explicit intertemporal expected utility maximization turned out to be an unfashionable choice for the 1970s, when the economics profession was persuaded to put an immense premium on models of "full rationality." Sidrauski and I were quite aware of the possibility of such a model, which would have been a generalization of his thesis work. At a conference at the University of Chicago in 1968, David Nissen presented a perfect foresight macroeconomic model that made clear that this path would lead directly back to the Walrasian general equilibrium results. Since I didn't believe in the relevance of that path to the understanding of real macroeconomic phenomena, I thought the main point in exploring this line of reasoning was to show how unrealistic its results were...

The project of a macroeconomic theory distinct from Walrasian general equilibrium theory rests heavily on the distinction between stock and flow equilibrium. In Keynes' vision, asset holders are forced to value existing and prospective assets speculatively without a full knowledge of the future. Our model represented this moment through the clearance of asset markets. In the Walrasian vision this distinction is dissolved through the imaginary device of clearing futures and contingency markets which establish flow prices that imply asset prices. The moral of Sidrauski's and my work is that some break with the full Walrasian system along temporary equilibrium lines is necessary as a foundation for a distinct macroeconomics. Once the implications of the stock-flow distinction in macroeconomics became clear, however, the temptation to finesse them by retreating to the Walrasian paradigm under the slogan of "rational expectations" became overwhelming to the American economics profession....

In my view, the rational expectations assumption which Lucas and Sargent put forward to "close" the Keynesian model, was only a disguised form of the assumption of the existence of complete futures and contingencies markets. When one unpacked the "expectations" language of the rational expectations literature, it turned out that these models assumed that agents formed expectations of futures and contingency prices that were consistent with the aggregate plans being made, and hence were in fact competitive general equilibrium prices in a model of complete futures and contingency markets. Arrow and Debreu had made the assumption of the existence of complete futures and contingency markets to give their version of the Walrasian model the appearance of coping with the real-world problems posed by the uncertainty of the future. To my mind, the rational expectations approach amounted to making the perfect-foresight assumptions that I had already considered and rejected on grounds of unrealism in the course of working with Sidrauski... What the profession took to be an exciting breakthrough in economic theory I saw as a boring and predictable retracing of an already discredited path.
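Before moving on, it may help to make the stock-flow distinction concrete. Here is a deliberately minimal sketch in Python (my own toy construction, emphatically not the Foley-Sidrauski model): at each instant the existing stocks of capital and money must be willingly held, which determines the price of capital, and the investment flow then adjusts until the marginal cost of investment equals that price.

```python
# A toy stock-flow system (illustrative only -- not the Foley-Sidrauski
# model). At each instant the existing *stocks* of capital K and money M
# must be willingly held: portfolio balance determines the price of
# capital q. The *flow* of investment then adjusts so that the marginal
# cost of investment equals q, and the flow cumulates into the stock.

theta = 0.6    # desired share of wealth held as capital
phi   = 0.5    # responsiveness of the investment flow to q - 1
M     = 100.0  # money stock, held fixed (no policy experiments here)

def clearing_price(K):
    # Stock equilibrium: q*K = theta * (q*K + M)  =>  q = theta*M / ((1-theta)*K)
    return theta * M / ((1.0 - theta) * K)

def investment_flow(q):
    # Flow equilibrium: marginal cost of investment 1 + I/phi equals q
    return phi * (q - 1.0)

K, dt = 50.0, 0.01
for _ in range(200_000):          # simulate 2000 units of time
    q = clearing_price(K)         # asset markets clear instant by instant
    K += investment_flow(q) * dt  # investment flows move the capital stock

print(f"K -> {K:.1f}, q -> {clearing_price(K):.3f}")  # approaches K = 150, q = 1
```

The only point of the exercise is the separation of roles: asset prices are set by stock equilibrium at each instant, while flows cumulate slowly into the stocks that constrain the next instant's equilibrium.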
To my mind the most appealing feature of the Foley-Sidrauski approach to microfoundations is that it allows for the possibility that individuals make mutually inconsistent plans based on heterogeneous beliefs about the future. This is what the rational expectations hypothesis rules out. Auxiliary assumptions such as sticky prices must then be imposed in order to make the models more consonant with empirical observation.

In contrast, the notion of temporary equilibrium (introduced by John Hicks) allows for the clearing of asset markets despite mutually inconsistent intertemporal plans. As time elapses and these inconsistencies are revealed, dynamic adjustments are made that affect prices and production. There is no presumption that such a process must converge to anything resembling a rational expectations equilibrium, although there are circumstances under which it might. The contemporary literature closest to this vision of the economy is based on the dynamics of learning, and this dates back at least to Marcet and Sargent (1989) and Howitt (1992), with more recent contributions by Evans and Honkapohja (2001) and Eusepi and Preston (2008). I am not by any means an insider to this literature but my instincts tell me that it is a promising direction in which to proceed.
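For concreteness, here is a minimal sketch of the kind of decreasing-gain learning dynamics this literature studies (my own toy example, in the spirit of the work cited above rather than drawn from any one paper): the temporary equilibrium price each period depends on the average forecast, agents update their forecast by a recursive average, and convergence to the rational expectations equilibrium depends on the strength of the expectational feedback.

```python
import numpy as np

# A minimal decreasing-gain learning sketch (a toy, not any cited model).
# Temporary equilibrium each period: p_t = mu + alpha * forecast + noise.
# The rational expectations equilibrium (REE) price is mu / (1 - alpha).
# Agents update their forecast of the mean price by a recursive average.

rng = np.random.default_rng(0)

def learn(alpha, mu=2.0, T=50_000):
    forecast = 0.0
    for t in range(1, T + 1):
        p = mu + alpha * forecast + 0.1 * rng.standard_normal()
        forecast += (p - forecast) / t  # recursive least-squares update
    return forecast

for alpha in (-0.5, 0.5, 1.5):
    print(f"alpha = {alpha:+.1f}: learned forecast = {learn(alpha):12.2f}, "
          f"REE = {2.0 / (1.0 - alpha):8.2f}")
# Learning converges to the REE when alpha < 1 (the E-stability condition
# in this model) and diverges when alpha > 1, even though a rational
# expectations equilibrium exists in both cases.
```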

---

Update (11/20). Nick Rowe (in a comment) directs us to an earlier post of his in which the importance of allowing for mutually inconsistent intertemporal plans is discussed. He too argues for an explicit analysis of the dynamic adjustment process that resolves these inconsistencies as they appear through time. It's a good post, and makes the point with clarity.

Some of the comments on Nick's post reflect the view that explicit consideration of disequilibrium dynamics is unnecessary, since such dynamics are known to converge to rational expectations equilibria in some models. My own view is that a lot more work needs to be done on learning before this sanguine claim can be said to have theoretical support. Furthermore, local stability of a rational expectations equilibrium in a linearized system does not tell us very much about the global properties of the original (nonlinear) system, since it leaves open the possibility of corridor stability: instability in the face of large but not small perturbations. (Tobin made a similar point in a paper that I have discussed previously here.)
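The corridor phenomenon is easy to exhibit even in a one-dimensional toy system (my own example, chosen purely for transparency): the linearization at the equilibrium is stable, yet any perturbation beyond a threshold escapes.

```python
# Corridor stability in a toy nonlinear system:  dx/dt = -x + x**3.
# The linearization at x = 0 is dx/dt = -x, which is stable, but the
# basin of attraction is only (-1, 1): small perturbations die out,
# large ones escape.

def simulate(x0, dt=1e-3, T=10.0):
    x = x0
    for _ in range(int(T / dt)):
        x += (-x + x**3) * dt
        if abs(x) > 1e6:
            return float('inf')  # diverged
    return x

for x0 in (0.5, 0.99, 1.01, 1.5):
    print(f"x0 = {x0:4.2f}  ->  x(10) = {simulate(x0)}")
# x0 = 0.50 and 0.99 decay back toward 0; x0 = 1.01 and 1.50 blow up.
```

A linearized analysis around x = 0 would pronounce this system stable and miss the instability entirely, which is exactly the concern about linearized rational expectations models raised above.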

---

Update (12/13). Mark Thoma and Leigh Caldwell have both posted interesting reactions to this. There's clearly a lot more to be said on the topic but for the moment I'll just link without further comment.

Wednesday, November 17, 2010

Herbert Scarf's 1964 Lectures: An Eyewitness Account

In the fourth volume of The Makers of Modern Economics is a fascinating autobiographical essay by Duncan Foley that traces the arc of his career as an economist and reflects upon developments in the discipline over the past four decades. Duncan describes his first exposure to economics at Swarthmore, his interactions with Tobin as a graduate student at Yale, the introduction in his doctoral dissertation of a concept of equity (now called envy-freeness) that does not depend on interpersonal comparisons of utility, his enormously fruitful collaboration with Miguel Sidrauski at MIT on the microfoundations of macroeconomics, his disillusionment with the rational expectations revolution, and his growing interest in heterodox economics at Stanford and subsequently at Barnard and Columbia.

There's enough material there for several interesting posts, but here I'll confine myself to reproducing Duncan's vivid recollection of a two semester course in mathematical economics taught by Herbert Scarf in 1964 (links added):
After the free pursuit of individual learning fostered by the Swarthmore Honors program, I found the return to traditional classroom teaching at Yale a difficult transition... I was frustrated in these courses not just by the tedium and inefficiency of the class lecture style, but by the tendency for instructors who knew a great deal about the substance and practice of their subjects to waste time rehearsing mathematical and theoretical topics they did not understand very well and often misconstrued...

The great exception to this pattern of misdirected pedagogy was Herbert Scarf's year-long course in Mathematical Economics. Scarf knew this material as well as anyone in the world, and had the gifts of patience, clarity of exposition, and personal charisma to convey it brilliantly and effectively. Scarf's teaching was a revelation to me of what could be accomplished in the classroom, with the appropriate attention to systematic organization, consistently careful preparation, and a judicious balance of lecture and discussion to maintain contact with the level of students' understanding. My notes from this course comprise a better and more complete reference for the topics than any book that has since been published.

The passage of time has revealed that the content of Scarf's course was just as remarkable in its depth and insight as the presentation. Remaining mostly within the realm of finite-dimensional spaces, and emphasizing duality and practical algorithms for the construction of solutions, Scarf gave a thorough tutorial on the mathematics of optimization, starting with linear programming via the simplex method and continuing through Kuhn-Tucker theory, dynamic programming, turnpike theory through Roy Radner's algorithmic approach, and integer programming. Since a huge proportion of economic models boil down to an optimization problem, this survey effectively unified and clarified an immense range of economics for the student. When Peter Diamond was working with James Mirrlees on the problem of optimal taxation (Diamond and Mirrlees, 1971a,b), for example, Scarf's approach helped me to grasp the relation between the complexity of their comparative statics results and the nonconvex structure of the constraint set (the intersection of the set of allocations that are resource and technology-feasible and those that can be supported by distorting taxes) in this problem. The study of these formal problems also convinced me that most economic theory depends on strong assumptions of convexity to assure the tractability of the resulting optimization problem, and that in situations where convexity is inherently absent or implausible it is very difficult to make much progress by traditional methods.

Scarf's course continued with a systematic review of general equilibrium theory, starting from the separating hyperplane approach to the Second Welfare Theorem, and including Gérard Debreu's proof (1959) of existence of a competitive equilibrium, the first presentation of Scarf's algorithmic approach to the calculation of competitive equilibria (1973), the theory of the core and its asymptotic equivalence to competitive equilibrium, and Scarf's own crucial counterexamples to the stability of competitive equilibrium under tâtonnement dynamics with more than two commodities (1960). The critical lesson Scarf emphasized in this discussion was the fact that the competitive equilibrium cannot, except in special cases such as representative agent economies, be represented as the solution of a mathematical programming problem. In other words, the Walrasian system does not generally admit a potential function. As a corollary to this observation we see that the comparative statics of competitive general equilibrium theory inherently lacks the organizing structure of convex programming, so that, for example, equilibrium prices are not in general monotonic functions of endowments. These observations planted the seeds in my mind of what grew to be grave doubts about the Walrasian system. These doubts do not focus on the logical consistency of the system, but on its adequacy as a useful representation of real economic relations...

In retrospect we can see that Scarf's course mapped out the whole development of high economic theory for the next twenty or twenty-five years. The theoretical literature of this period has largely been concerned with generalizing the concepts he taught to more sophisticated commodity spaces (such as infinite-dimensional spaces and spaces of stochastic processes), and rediscovering the general properties and limitations of competitive equilibrium theory in these contexts. This has been a source of both wonder and concern to me. I am amazed at how prescient a mind like Scarf's can be about the future development of a field, guided purely by superb mathematical instincts. But what does this imply about the theoretical fertility of economics during this period? If the core theoretical ideas that have dominated the field since were all present in the Yale classroom in 1964, it suggests that economic theory has been in a scholastic, formalistic phase of development during this period, primarily focusing on working out increasingly esoteric implications of well-established concepts.
Duncan tells me that he still has his notes from this course and that Scarf, who recently retired from teaching, remains full of vigor.
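For readers who have not seen the tâtonnement counterexample mentioned in the passage above, here is a sketch of one standard rendering of it (reconstructed from memory, so treat the details as illustrative rather than a faithful reproduction of Scarf's 1960 paper): three goods and three consumers with Leontief preferences, for which the tâtonnement price adjustment orbits the equilibrium instead of converging to it.

```python
import numpy as np

# A sketch of one standard rendering of Scarf's (1960) counterexample.
# Three goods, three consumers; consumer i is endowed with one unit of
# good i and has Leontief preferences over goods i and i+1 (mod 3), so
# with income p[i] she demands p[i] / (p[i] + p[i+1]) units of each.

def excess_demand(p):
    z = np.empty(3)
    for j in range(3):
        z[j] = (p[j] / (p[j] + p[(j + 1) % 3])              # consumer j
                + p[(j - 1) % 3] / (p[(j - 1) % 3] + p[j])  # consumer j-1
                - 1.0)                                      # unit supply
    return z

# Tatonnement: dp/dt = z(p). The unique normalized equilibrium is
# p = (1, 1, 1), but prices orbit around it instead of converging:
# Walras' law (p . z(p) = 0) makes the sum of squared prices a
# conserved quantity under this adjustment process.
p, dt = np.array([0.6, 1.0, 1.4]), 1e-3
for _ in range(int(50.0 / dt)):
    p += excess_demand(p) * dt

print("prices at t = 50:", np.round(p, 3))
print("distance from equilibrium:", round(float(np.linalg.norm(p - 1.0)), 3))
```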

In subsequent posts I hope to discuss Duncan's reflections on the microfoundations of macroeconomics, his work with Sidrauski, his concern that the rational expectations revolution was a step backwards in the development of the theory, and his view that "some break with the full Walrasian system along temporary equilibrium lines is necessary as a foundation for a distinct macroeconomics." (The Hicksian concept of temporary equilibrium allows for asset market clearing in the face of heterogeneous beliefs and mutually inconsistent intertemporal plans.) These are themes that I have touched upon in previous posts and would like to revisit soon. In the meantime, let me repeat my plea to the fellows of the Econometric Society to nominate Duncan for election to their ranks.

---

Update (11/18). Glenn Loury writes in to say:
I never had much interaction with Scarf, but his pedagogic virtuosity and mastery of mathematical economics circa 1970 reminds me of... Stanley Reiter, whom I encountered as a raw assistant professor at Northwestern in the 1970s. Stan, a close friend of and occasional collaborator with Leo Hurwicz, was director of the Math Center at Northwestern (forerunner of MEDS), and in the late 1970s had a huge impact on young scholars like Paul Milgrom, Bengt Holmstrom, Mark Satterthwaite and Roger Myerson...
I don't think I agree with the claim that much of "high economic theory" since the 60s has been dotting "i's" and crossing "t's". That was true through the mid-seventies, perhaps, but the asymmetric information, mechanism design, and incomplete contract theory revolutions (Hurwicz/Myerson/Maskin, e.g.) -- and the emergence of deeply insightful applied theory in a variety of fields from labor and I/O to money, finance and trade -- suggest otherwise to me.
I basically agree with Glenn on this latter point but, in Duncan's defense, the focus of his essay was on the microfoundations of macroeconomics and the futility of simply aggregating the Walrasian system. And on this dimension I think that progress has been limited at best.

---

Update (11/18). A wonderful comment by Jonathan Conning:
I too sat in Herb Scarf's Yale Micro Theory classroom and still remember the stunned awe that I and my classmates felt at the end of his first lecture with us, which happened to be on the simplex algorithm.
My only regret is that that semester at Yale (1990) we only got a handful of micro lectures from Scarf and so did not get the full "systematic review of general equilibrium theory" that Foley mentions.
I have little to say to improve on Duncan's glowing description of a Scarf lecture except to note that by 1990 the Hillhouse basement classroom had smooth sliding blackboards (which I do not imagine they had in 1964). This meant that there were always three blackboards in use, as he could fill one blackboard full of equations and slide it to conceal or reveal what had been written before. One of the things I recall most vividly is how artfully and efficiently Scarf used those boards, and how rarely he used the eraser. A lecture might start with definitions and theory, take a detour through an expertly chosen example to reinforce intuition, and in the end always return, with the smoothest glide of a hand, to reveal again exactly the right portion of the board and bring the lecture full circle back to the climactic point he wanted. Everything seemed expertly choreographed and timed down to the very last second.
I hope that other former students of Scarf will somehow stumble upon this post.

Monday, October 11, 2010

Glenn Loury on Peter Diamond

Glenn Loury has kindly forwarded me a letter he wrote earlier this year in appreciation of Peter Diamond, one of the co-recipients of this year's Nobel Memorial Prize in Economics. The tribute was written for the occasion of Diamond's retirement, and seems worth publishing today:
April 20, 2010
Prof. James Poterba, Chair
Department of Economics
Massachusetts Institute of Technology

Dear Jim:

It is a pleasure to contribute a brief note of tribute to Peter Diamond, on this occasion of celebration for his work as scholar and teacher.

Peter was an inspiration and role model for me during my student years at MIT. My encounters with him -- in the classroom and in his office -- left an indelible impression. I recall going over to the Dewey Library shortly after arriving in Cambridge, in the summer of 1972, and digging out Peter's doctoral dissertation. This was a mistake! Peter's reputation as a powerful theorist had been noted by my undergraduate teachers at Northwestern. I wanted to see how this reputed superstar had gotten his start. Just how good could it be, I wondered? I had no idea! What I discovered was an elegant, profound and exquisitely argued axiomatic treatment of the general problem of representing consumption preferences over an infinite time horizon, extending results obtained by his undergraduate teacher and the future Nobel Laureate, Tjalling Koopmans.

I prided myself on being a budding mathematician in those years. Yet, Peter's effortless mastery in that dissertation of the relevant techniques from topology and functional analysis, and his successful application of those methods to a problem of fundamental importance in economic theory -- all accomplished by age 23, younger than I was at the moment I held his thesis binder in my hands! -- was simply stunning. This set what seemed to me then, and still seems now, an unapproachable standard. I was depressed for weeks thereafter!

Even more depressing was what I discovered as I got to know Peter better over the course of my first two years in the program: that mathematical technique was not even his strongest suit! An unerring sense of what constitute the foundational theoretical questions in economic science, and a rare creative gift of being able to imagine just the right formal framework in the context of which such questions can be posed and answered with generality -- this, I came to understand, is what Peter Diamond was really good at.

And so, I learned from him in those years what turned out to be the most important lesson of my graduate educational experience -- that, in the doing of economic theory and relative to the behavioral significance of the issue under investigation, technique is always a matter of secondary importance -- neither necessary nor sufficient for the production of lasting insights. I learned this from the careful study of Peter's seminal contributions to growth theory, the theories of taxation and social insurance, the theories of choice under uncertainty and the allocation of risk-bearing, the theories of legal rules and institutions, and the theory of unemployment. I also learned this from Peter's elegant and comprehensive lectures on his own work in these areas and that of other scholars. And so I came -- slowly and fitfully, because I was rather attached to the joys of doing mathematics for its own sake -- to see the world the way that Peter Diamond saw it. And, in the process, I became a much better economist.

Peter graciously agreed to be the second reader on my dissertation, even though I was writing outside of his areas of specialization at the time, and my intellectual indebtedness to him only increased over the course of my last two years at MIT. It has by now become rather clear that I shall never be able to discharge that debt.

So, thanks Peter, for your extraordinary generosity as a teacher, and for your unmatched example as a scholar.

Glenn C. Loury
Merton P. Stoltz Professor of the Social Sciences
Professor of Economics and of Public Policy
Brown University
The following passage from the letter is worth repeating:
And so, I learned from him in those years what turned out to be the most important lesson of my graduate educational experience -- that, in the doing of economic theory and relative to the behavioral significance of the issue under investigation, technique is always a matter of secondary importance -- neither necessary nor sufficient for the production of lasting insights.
I have had very little time for blogging recently, thanks to two new courses, but if I can find the time I'd like to write a post on Diamond's classic 1982 paper on search, and the wonderful coconut parable he used to illuminate the theory.

Tuesday, October 05, 2010

Hot Potatoes

RT Leuchtkafer follows up on his earlier remarks with a comment in the Financial Times:
After a detailed four-month review of the flash crash, looking at market data streams tick-by-tick and down to the millisecond, the SEC concluded that a single order in the e-mini S&P 500 futures market ignited an inferno of panic selling. It was over in about seven minutes, and $1,000bn was up in smoke.
Within hours of the SEC’s report, the CME Group, owner of the Chicago Mercantile Exchange, issued a statement to point out that the suspect e-mini order was entirely legitimate, that it came from an institutional asset manager (that is, the public), and was little more than 1 per cent of the e-mini’s daily volume and less than 9 per cent of e-mini volume during and immediately after the crash.
How did this small bit of total volume cause such a conflagration?
You do it with computers. Specifically, you do it with unregulated computers. You pay rent so your machines sit inside the exchanges, minimising travel time for your electrons. You pay licence fees so your computers eat their fill of super-fast proprietary data feeds, data containing a shocking amount of information on everyone’s orders, not just on your own.
And when your computers spot trouble, such as a larger than expected sell-off, they dump inventory and they shut down – because they can.
No one knows what a “larger than expected sell-off” might be, but on May 6 a single hedge that added just an extra 9 per cent of selling pressure was enough to cause chaos.
When that happened, the SEC’s report says, high-frequency traders “stopped providing liquidity and began to take liquidity”, starting a frenzied race for anyone willing to buy. The report likened the panic to a downward-spiralling game of “hot potato” where, as HFT firms bought beyond their risk limits, they pulled their own bids and frantically sold to anyone they could, which were often just other HFT firms, who themselves quickly reached their risk limits and tried to sell to anyone they could, and so on – into the abyss. Fratricide ruled the day. Firms then fled the market altogether, accelerating the sell-off.
Punch drunk, markets rebounded when other market participants realised what had just happened and jumped into the market to buy.
Fair enough, some might say. Markets do panic, and sometimes for no reason. But the larger HFT firms register as formal marketmakers, receiving a variety of regulatory advantages, including greater leverage. All of this extends their enormous reach and power. In the past, they fulfilled certain obligations and observed certain restraints as a quid pro quo for those advantages, a quid pro quo intended to keep them in the market when markets were under stress and to prevent them from adding to that stress. Over the past few years, however, decades-long obligations and restraints all but disappeared, while many advantages stayed.
Computing power also opened marketmaking to a field of unregistered, or informal, high-frequency marketmakers, what investor and commentator Paul Kedrosky termed the “shadow liquidity system”. Exchanges will pay you to do it, too, just as they pay formal marketmakers, and require little in return.
The result is a loose confederation of unregulated, or lightly regulated, high-frequency marketmakers. They feed on what many consider confidential order information, play hot potato in volatile markets, and then instantly change the game to hide-and-seek if even a single hedge hits an unseen and unknowable tipping point.
The only quibble I have with this analysis is that too many different classes of algorithmic trading strategies are being bundled together under the HFT banner. In particular I would like to see a distinction made between directional strategies that are based on predicted short-term price movements, and arbitrage-based strategies that exploit price differentials across assets and markets. Both of these can be implemented with algorithms, rely on rapid responses to incoming market data, and involve very short holding periods. But they have completely different implications for asset price volatility. It is the mix of strategies rather than the method of their implementation that is the key determinant of market stability.
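The distinction can be made concrete with two caricature order-placement rules (purely illustrative sketches of my own, not descriptions of any actual trading system): the first trades with a predicted short-term move, the second against a cross-market price differential.

```python
# Two caricature strategies. Both are fast and short-horizon, but they
# respond to price signals in opposite ways.

def directional_order(recent_returns, threshold=0.001):
    """Momentum rule: trade *with* a predicted short-term move.

    Buys into rallies and sells into declines, so in aggregate it
    amplifies whatever move triggered it (destabilizing under stress).
    """
    momentum = sum(recent_returns) / len(recent_returns)
    if momentum > threshold:
        return "BUY"
    if momentum < -threshold:
        return "SELL"
    return None

def arbitrage_order(price_a, price_b, band=0.01):
    """Cross-market rule: trade *against* a price differential.

    Sells the dearer market and buys the cheaper one, pushing the two
    prices back together (stabilizing, at least while it stays active).
    """
    gap = price_a - price_b
    if gap > band:
        return ("SELL", "A"), ("BUY", "B")
    if gap < -band:
        return ("BUY", "A"), ("SELL", "B")
    return None
```

Both rules would show up in the data as high-frequency algorithmic trading, which is precisely why aggregating them under one banner obscures their very different effects on volatility.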

---

Update: Leuchtkafer writes in to say:
I should have been clear in the piece that I was talking specifically about market-making strategies.
I appreciate the clarification, and agree with his characterization of the new market makers.

Friday, October 01, 2010

RT Leuchtkafer on the Flash Crash Report

The long-awaited CFTC-SEC report on the flash crash has finally been released. I'm still working my way through it, and hope to respond in due course. In the meantime, here is an email (posted with permission) from the very interesting RT Leuchtkafer, whose thoughts on recent changes in market microstructure have been discussed at some length previously on this blog:
It's natural for any critic to focus on what he wants in the report, and I'm no different.

From the report, in the futures market: "HFTs stopped providing liquidity and instead began to take liquidity." (report pp 14-15); "...the combined selling pressure from the Sell Algorithm, HFT's and other traders drove the price of the E-Mini down..." (report p 15)

And in the equities market: "In general, however, it appears that the 17 HFT firms traded with the price trend on May 6 and, on both an absolute and net basis, removed significant buy liquidity from the public quoting markets during the downturn..." (report p 48); "Our investigation to date reveals that the largest and most erratic price moves observed on May 6 were caused by withdrawals of liquidity and the subsequent execution of trades at stub quotes." (p 79)

It's also natural - if ungraceful - for a critic to say "I told you so." OK, I'm no ballerina, and I told you so (April 16, 2010):

"When markets are in equilibrium these new participants increase available liquidity and tighten spreads. When markets face liquidity demands these new participants increase spreads and price volatility and savage investor confidence."

"...[HFT] firms are free to trade as aggressively or passively as they like or to disappear from the market altogether."

"...[HFT firms] remove liquidity by pulling their quotes and fire off marketable orders and become liquidity demanders. With no restraint on their behavior they have a significant effect on prices and volatility....they cartwheel from being liquidity suppliers to liquidity demanders as their models rebalance. This sometimes rapid rebalancing sent volatility to unprecedented highs during the financial crisis and contributed to the chaos of the last two years. By definition this kind of trading causes volatility when markets are under stress."

"Imagine a stock under stress from sellers such was the case in the fall of 2008. There is a sell imbalance unfolding over some period of time. Any HFT market making firm is being hit repeatedly and ends up long the stock and wants to readjust its position. The firm times its entrance into the market as an aggressive seller and then cancels its bid and starts selling its inventory, exacerbating the stock's decline."

"So in exchange for the short-term liquidity HFT firms provide, and provide only when they are in equilibrium (however they define it), the public pays the price of the volatility they create and the illiquidity they cause while they rebalance."

Finally, the report should put paid to the notion that HFT firms are simple liquidity providers and that they don't withdraw in volatile markets, claims that have been floating around for quite a while.

What happens next?
In a follow-up message, Leuchtkafer adds: 
I'd like to note there were many other critics who got it right, including (most importantly) Senator Kaufman, Themis Trading, David Weild, and others. They all deserve a shout out.
To this list I would add Paul Kedrosky.
Firms that began to "take liquidity" during the crash would have suffered significant losses were it not for the fact that many of their trades were subsequently broken. I have argued repeatedly that this cancellation of trades was a mistake, not simply on fairness grounds but also from the perspective of market stability:
By canceling trades, the exchanges reversed a redistribution of wealth that would have altered the composition of strategies in the trading population. I'm sure that many retail investors whose stop loss orders were executed at prices far below anticipated levels were relieved. But the preponderance of short sales among trades at the lowest prices and the fact that aberrant price behavior also occurred on the upside suggests to me that the largest beneficiaries of the cancellation were proprietary trading firms making directional bets based on rapid responses to incoming market data. The widespread cancellation of trades following the crash served as an implicit subsidy to such strategies and, from the perspective of market stability, is likely to prove counter-productive. 
The report does appear to confirm that some of the major beneficiaries of the decision to cancel trades were algorithmic trading outfits. But I need to read it more closely before offering further comment. 

Saturday, September 04, 2010

Economic Consequences of Speculative Side Bets

The following column was written jointly with Yeon-Koo Che and is crossposted from Vox EU with minor edits and links to references.
---
There is arguably no class of financial transactions that has attracted more impassioned commentary over the past couple of years than naked credit default swaps. Robert Waldmann has equated such contracts with financial arson, Wolfgang Münchau with bank robberies, and Yves Smith with casino gambling. George Soros argues that they facilitate bear raids, as does Richard Portes who wants them banned altogether, and Willem Buiter considers them to be a prime example of harmful finance. In sharp contrast, John Carney believes that any attempt to prohibit such contracts would crush credit markets, Felix Salmon thinks that they benefit distressed debtors, and Sam Jones argues that they smooth out the cost of borrowing over time, thus reducing interest rate volatility.
One reason for the continuing controversy is that arguments for and against such contracts have been expressed informally, without the benefit of a common analytical framework within which the economic consequences of their use can be carefully examined. Since naked credit default swaps necessarily have a long and a short side and the aggregate payoff nets to zero, it is not immediately apparent why their existence should have any effect at all on the availability and terms of financing or the likelihood of default. And even if such effects do exist, it is not clear what form and direction they take, or the implications they have for the allocation of a society's productive resources.
In a recent paper we have attempted to develop a framework within which such questions can be addressed, and to provide some preliminary answers. We argue that the existence of naked credit default swaps has significant effects on the terms of financing, the likelihood of default, and the size and composition of investment expenditures. And we identify three mechanisms through which these broader consequences of speculative side bets arise: collateral effects, rollover risk, and project choice.
A fundamental (and somewhat unorthodox) assumption underlying our analysis is that the heterogeneity of investor beliefs about the future revenues of a borrower is due not simply to differences in information, but also to differences in the interpretation of information. Individuals receiving the same information can come to different judgments about the meaning of the data. They can therefore agree to disagree about the likelihood of default, interpreting such disagreement as arising from different models rather than different information. As in prior work by John Geanakoplos on the leverage cycle, this allows us to speak of a range of optimism among investors, where the most optimistic do not interpret the pessimism of others as being particularly informative. We believe that this kind of disagreement is a fundamental driver of speculation in the real world.
When credit default swaps are unavailable, the investors with the most optimistic beliefs about the future revenues of a borrower are natural lenders: they are the ones who will part with their funds on terms most favorable to the borrower. The interest rate then depends on the beliefs of the threshold investor, who in turn is determined by the size of the borrowing requirement. The larger the borrowing requirement, the more pessimistic this threshold investor will be (since the size of the group of lenders has to be larger in order for the borrowing requirement to be met). Those more optimistic than this investor will lend, while the rest find other uses for their cash.
Now consider the effects of allowing for naked credit default swaps. Those who are most pessimistic about the future prospects of the borrower will be inclined to buy naked protection, while those most optimistic will be willing to sell it. However, pessimists also need to worry about counterparty risk - if the optimists write too many contracts they may be unable to meet their obligations in the event that a default does occur, an event that the pessimists consider to be likely. Hence the optimists have to support their positions with collateral, which they do by diverting funds that would have gone to borrowers in the absence of derivatives. The borrowing requirement must then be met by appealing to a different class of investors, who are neither so optimistic that they wish to sell protection, nor so pessimistic that they wish to buy it. The threshold investor is now clearly more pessimistic than in the absence of derivatives, and the terms of financing are accordingly shifted against the borrower. As a result, for any given borrowing requirement, the bond issue is larger and the price of bonds accordingly lower when investors are permitted to purchase naked credit default swaps.
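A deliberately crude numerical sketch may help fix ideas (my toy parameterization here, far simpler than the model in the paper): risk-neutral investors with one dollar each hold beliefs about repayment that are uniformly distributed, an investor lends if and only if her belief exceeds the bond price, and selling protection diverts the most optimistic investors' dollars into collateral.

```python
# A crude sketch of the threshold-investor logic (toy numbers only).
# A unit mass of risk-neutral investors, each with one dollar, holds
# beliefs pi ~ uniform on [0, 1] about the probability that a bond
# paying 1 repays. At price q a dollar buys 1/q bonds, so an investor
# lends iff pi >= q: the marginal lender's belief equals the price.

def bond_price(funding_need, collateral_diverted=0.0):
    # Funds for bonds come from investors optimistic enough to lend but
    # not so optimistic that they sell protection instead (the top
    # `collateral_diverted` mass, whose dollars sit in collateral).
    # Market clearing: (1 - collateral_diverted - q) * $1 = funding_need.
    q = 1.0 - collateral_diverted - funding_need
    if q <= 0:
        raise ValueError("borrowing requirement cannot be met")
    return q

D = 0.30  # borrowing requirement (dollars)
print("price without naked CDS:", bond_price(D))        # 0.70
print("price with naked CDS   :", bond_price(D, 0.15))  # 0.55
# Diverting the optimists' dollars into collateral makes the marginal
# (threshold) lender more pessimistic, lowering the bond price and
# shifting the terms of financing against the borrower.
```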
This effect does not arise if credit default swaps can only be purchased by holders of the underlying security. In fact, it can be shown that allowing for only “covered” credit default swaps has much the same consequences as allowing optimists to buy debt on margin: it leads to higher bond prices, a smaller issue size for any given borrowing requirement, and a lower likelihood of eventual default. While optimists take a long position in the debt by selling such contracts, they facilitate the purchase of bonds by more pessimistic investors by absorbing much of the credit risk. In contrast with the case of naked credit default swaps, therefore, the terms of lending are shifted in favor of the borrower. The difference arises because pessimists can enter directional positions on default in one case but not the other.
While this simple model sheds some light on the manner in which the terms of financing can be affected by the availability of credit derivatives, it does not deal with one of the major objections to such contracts: the possibility of self-fulfilling bear raids. To address this issue it is necessary to allow for a mismatch between the maturity of debt and the life of the borrower. This raises the possibility that a borrower who is unable to meet contractual obligations because of a revenue shortfall can roll over the residual debt, thereby deferring payment into the future.
As many economists have previously observed, multiple self-fulfilling paths arise naturally in this setting (see, for instance, Calvo, Cole and Kehoe, and Cohen and Portes). If investors are confident that debt can be rolled over in the future they will accept lower rates of interest on current lending, which in turn implies reduced future obligations and allows the debt to be rolled over with greater ease. But if investors suspect that refinancing may not be available in certain states, they demand greater interest rates on current debt, resulting in larger future obligations and an inability to refinance if the revenue shortfall is large.
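A toy version of this multiplicity is easy to construct (my own illustration, not the model in the paper): if the default probability rises with the contractual interest rate, the lenders' break-even condition can have two solutions, each of which confirms the beliefs that support it.

```python
import math

# Toy self-fulfilling rollover risk. A firm owes D, repayable next
# period at gross rate x = 1 + r. Revenue is uniform on [0, 2], and
# default (with zero recovery) occurs when revenue < D * x, so the
# default probability is D * x / 2. Risk-neutral lenders facing a zero
# risk-free rate require
#     (1 - D * x / 2) * x = 1,
# a quadratic with two roots whenever D < 1/2: a cheap-credit
# equilibrium and an expensive-credit one, each self-fulfilling.

D = 0.375
disc = math.sqrt(1.0 - 2.0 * D)
for x in ((1.0 - disc) / D, (1.0 + disc) / D):
    print(f"gross rate x = {x:.3f}: r = {x - 1.0:.0%}, "
          f"default probability = {D * x / 2.0:.0%}")
# x = 1.333: r = 33%, default prob 25%  (optimistic, self-confirming)
# x = 4.000: r = 300%, default prob 75% (pessimistic, self-confirming)
```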
A key question then is the following: how does the availability of naked credit default swaps affect the range of borrowing requirements for which pessimistic paths (with significant rollover risk) exist? And conditional on the selection of such a path, how are the terms of borrowing affected by the presence of these credit derivatives?
For reasons that are already clear from the baseline model, we find that pessimistic paths involve more punitive terms for the borrower when naked credit default swaps are present than when they are not. More interestingly, we find that there is a range of borrowing requirements for which a pessimistic path exists if and only if such contracts are allowed. That is, there exist conditions under which fears about the ability of the borrower to repay debt can be self-fulfilling only in the presence of credit derivatives. It is in this precise sense that the possibility of self-fulfilling bear raids can be said to arise when the use of such derivatives is unrestricted.
The finding that borrowers can more easily raise funds and obtain better terms when the use of credit derivatives is restricted does not necessarily imply that such restrictions are desirable from a policy perspective. A shift in terms against borrowers will generally reduce the number of projects that are funded, but some of these ought not to have been funded in the first place. Hence the efficiency effects of a ban are ambiguous. However, such a shift in terms against borrowers can also have a more subtle effect with respect to project choice: it can tilt managerial incentives towards the selection of riskier projects with lower expected returns. This happens because a larger debt obligation makes projects with greater upside potential more attractive to the firm, as more of the downside risk is absorbed by creditors.
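The incentive at work here is easy to see in a bare-bones example (my numbers, chosen only for arithmetic transparency): equity holders receive the payoff net of debt, floored at zero, so a large enough debt obligation makes a riskier project with a lower expected return privately optimal.

```python
# Risk shifting in miniature. Equity holders receive max(payoff - debt, 0),
# so a larger debt obligation can flip the privately preferred project
# from the safe one to a riskier one with lower expected value.

def equity_value(project, debt):
    return sum(prob * max(payoff - debt, 0.0) for payoff, prob in project)

safe  = [(100.0, 1.0)]               # pays 100 for sure (expected 100)
risky = [(150.0, 0.5), (0.0, 0.5)]   # expected value only 75

for debt in (40.0, 80.0):
    print(f"debt = {debt:5.1f}:  safe equity = {equity_value(safe, debt):5.1f},  "
          f"risky equity = {equity_value(risky, debt):5.1f}")
# debt = 40: safe (60.0) beats risky (55.0) -> safe project chosen
# debt = 80: risky (35.0) beats safe (20.0) -> risk shifting
```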
The central message of our work is that the existence of zero sum side bets on default has major economic repercussions. These contracts induce investors who are optimistic about the future revenues of borrowers, and would therefore be natural purchasers of debt, to sell credit protection instead. This diverts their capital away from potential borrowers and channels it into collateral to support speculative positions. As a consequence, the marginal bond buyer is less optimistic about the borrower's prospects, and demands a higher interest rate in order to lend. This can result in an increased likelihood of default, and the emergence of self-fulfilling paths in which firms are unable to roll over their debt, even when such trajectories would not arise in the absence of credit derivatives. And it can influence the project choices of firms, leading not only to lower levels of investment overall but also in some cases to the selection of riskier ventures with lower expected returns.
James Tobin (1984) once observed that the advantages of greater “liquidity and negotiability of financial instruments” come at the cost of facilitating speculation, and that greater market completeness under such conditions could reduce the functional efficiency of the financial system, namely its ability to facilitate “the mobilization of saving for investments in physical and human capital... and the allocation of saving to their more socially productive uses.” Based on our analysis, one could make the case that naked credit default swaps are a case in point.
This conclusion, however, is subject to the caveat that there exist conditions under which the presence of such contracts can prevent the funding of inefficient projects. Furthermore, an outright ban may be infeasible in practice due to the emergence of close substitutes through financial engineering. Even so, it is important to recognize that the proliferation of speculative side bets can have significant effects on economic fundamentals such as the terms of financing, the patterns of project selection, and the incidence of corporate and sovereign default.