Friday, June 10, 2011

What is the value of research?

What is the value of the research we do? In Economics, as in most other sciences (except perhaps where patents are relevant), the typical way to evaluate the impact of research is to count citations, possibly weighted in some way. But this only evaluates how the research output is viewed within a narrowly defined scientific community. The contribution to social welfare is an entirely different beast to evaluate.

Robert Hofmeister tries to give research some value. The approach is to consider the scientific process through cohorts, where each wave provides fundamental research as well as end-applications based on previous fundamental research. A particular research result can thus have a return over many generations. It is an interesting way to properly attribute the intellectual source of a new product or process, but the exercise is of little value if it is not possible to quantify the social value of the end-application. Indeed, Hofmeister goes back to using citations in Economics for a data application, which is equivalent to evaluating research only within the scientific community. In terms of the stated goal of the paper, we are back to square one. In terms of getting a better measure of citation impact, this is an interesting application of an old idea. And the resulting rankings of journals and articles look very much like those that are already available.
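To fix ideas, here is a minimal sketch of what such recursive attribution could look like. This is my toy illustration, not Hofmeister's actual model: the attribution share, discount factor and end-application values are all made up.

```python
# Toy attribution of end-application value back to a generation-0 result.
# Each later generation builds on the result and passes back a share of
# its value. All numbers are hypothetical.

def attributed_value(app_values, share=0.5, discount=0.95):
    """Value credited to the original result across generations."""
    total = 0.0
    credit = 1.0  # fraction of each generation's value credited back
    for t, v in enumerate(app_values):
        total += (discount ** t) * credit * v
        credit *= share  # attribution decays with each generation
    return total

# End-application values generated in generations 0, 1, 2:
print(attributed_value([10.0, 20.0, 40.0]))  # 10 + 0.95*0.5*20 + 0.95^2*0.25*40
```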

Thursday, June 9, 2011

The high welfare cost of small information failures

Are stock markets efficient in the sense that stock prices reflect all available information? This question has preoccupied finance lately, as many started to doubt the efficient market hypothesis during the latest crisis. One critical aspect is whether current tests of the hypothesis actually give an accurate picture, and if not, whether this matters in a significant way.

Tarek Hassan and Thomas Mertens claim that it is possible for stock markets to aggregate information properly, that small errors at the household level can nevertheless accumulate and amplify if these errors are correlated, and that the welfare consequences can be substantial even when the initial errors are small. This cost emerges from a portfolio misallocation due to the higher volatility of stock prices. To get to such a result, they take a standard real business cycle model and add households who receive a noisy private signal about future total factor productivity. Households then look to the stock market for additional information when forming expectations. If you allow households to be on average more optimistic than rationality warrants in some states, and more pessimistic in others, you get the above results. Interestingly, Hassan and Mertens show that households face little incentive to correct individually for these small common errors (worth 0.01% of average consumption), but collectively the consequences are large (2.4%). Talk about an amplification.
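The statistical mechanism behind this is worth making explicit: idiosyncratic signal errors wash out in the cross-section, while a common, correlated component does not, no matter how many households there are. A quick simulation sketch of that point, with made-up noise parameters rather than the authors' calibration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000          # households
sigma_i = 0.10       # idiosyncratic signal noise (hypothetical)
sigma_c = 0.01       # small common error, identical across households

idio = rng.normal(0, sigma_i, N)   # averages out across households
common = rng.normal(0, sigma_c)    # does not average out

print(f"average idiosyncratic error: {idio.mean():+.5f}")          # near 0
print(f"average total error:         {(idio + common).mean():+.5f}")  # stuck at the common error
```

However large N gets, the market-wide average error stays pinned at the common component, which the stock price then embeds.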

Wednesday, June 8, 2011

Pollution has an impact on worker productivity

Pollution regulation is typically cast as a game between citizens and firms, the former suffering the consequences of pollution while the latter are at its origin. In such a case, there is no incentive for firms to abate pollution, and the government has to mediate. But could a case be made that firms should be willing, individually or collectively, to reduce pollution? One way could be green labeling, which may increase the demand for their products. Another would be if firms realized that pollution has an impact on their own productivity or on their labor supply.

Joshua Graff Zivin and Matthew Neidell take the worker productivity angle by using a dataset of dairy farm workers from a large farm in the Central Valley of California. In particular, they look at how ozone levels affect the output of piece-rate workers. And it is substantial. For example, a 10 ppb reduction in ozone increases productivity by 4.2%, keeping in mind that the standard deviation of ozone levels is 13 ppb. And if you object that some of the workers fall under minimum wage law and may not exert the right effort, be reassured, the authors took that into account. In addition, this impact occurs even when the ozone level is well below the current national standards. Realizing this, industry should be more willing to accept the suggested tightening of pollution standards for ozone, and for the nitrogen oxides and volatile organic chemicals that are the source of ground-level ozone.
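To put these numbers in perspective, a back-of-the-envelope calculation using the paper's own figures, and assuming the effect is roughly linear in ozone:

```latex
% One standard deviation of ozone is 13 ppb; the estimated effect is a
% 4.2% productivity gain per 10 ppb reduction. Assuming linearity, a
% one-standard-deviation reduction yields
\Delta\,\text{productivity} \approx 13\,\text{ppb} \times \frac{4.2\%}{10\,\text{ppb}} \approx 5.5\%
```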

Tuesday, June 7, 2011

Does it make sense to subsidize biofuels?

In a relatively short time, biofuels have become remarkably popular, especially as an additive to regular petroleum-based fuel. This is at least in part due to massive subsidies from the US to fuel and corn producers. As biofuels compete with food, this has led to major price increases for corn and sugar, with adverse consequences for importing countries. This raises the question: is it actually a good idea to subsidize biofuels? I mentioned previously that it is preferable to tax other energy products rather than subsidize alternative energies (1, 2), but let us revisit this issue.

Subhayu Bandyopadhyay, Sumon Bhaumik and Howard Wall use a general equilibrium trade model and confirm that if there is a Pigovian tax on conventional fuels, subsidies are not needed. But if the Pigovian tax is not available or too low (as is the case in the US), then a subsidy for biofuels makes sense. If the country in question is large, there are further implications through increased worldwide demand for food. In that case, a food exporter wants to subsidize biofuels and tax conventional fuels, while a food-importing country would only want to subsidize biofuels if the pollution reduction effect is large enough.

Hector Nuñez, Hayri Önal, Madhu Khanna, Xiaoguang Chen and Haixiao Huang look more specifically at the interaction of policies in the US and Brazil, the two largest producers of biofuels. Indeed, the US imposes a special tariff on imported biofuels, in particular the more advanced sugarcane-based one from Brazil. Brazil is also the largest producer and exporter of beef. The paper uses a multi-country, multi-good model, unfortunately only in partial equilibrium, but it takes into account possible crop rotations and different categories of land. It concludes that eliminating the tariff would significantly reduce biofuel production in the US, with the latter importing biofuels from Brazil and exporting corn. While this reduces producer welfare compared to the status quo, it increases consumer welfare. Given the political system in the US, guess what will happen.

Monday, June 6, 2011

Shortsightedness and tariffs

International trade theory is in large part about optimal trade policy, yet it is incapable of explaining the observed level of tariffs. While under rather general circumstances theory will tell you that zero tariffs improve general welfare, once you take into account that governments threaten and negotiate in a Nash equilibrium, tariffs should be at about 30%. They are generally far below that, and it is a big challenge to explain the difference.

Mario Larch and Wolfgang Lechthaler argue that all that is needed is for trade theory to finally catch up with the rest of economics and use some dynamics. Specifically, transform the problem into a dynamic Nash equilibrium, take transition paths into account, and you get realistic numbers if you assume that the negotiating politicians are short-sighted, which is certainly not far from the truth. This matters because the various transitional effects of a tariff change materialize at different speeds. A decrease in tariffs has a quick and positive impact through an immediate increase in consumption, while the counter-effect through the closing of inefficient firms takes much longer. Impatient politicians discount the latter heavily.
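To see the discounting logic at work, here is a minimal sketch with purely illustrative flows and timing, not the authors' calibration: a tariff cut brings an immediate per-period gain, while the adjustment cost of closing inefficient firms only kicks in later, so the cut looks better the more impatient the decision-maker.

```python
# Present value of a tariff cut: an immediate per-period gain and a
# delayed adjustment cost. Magnitudes and timing are purely illustrative.

def present_value(beta, gain=1.0, cost=1.5, cost_start=5, horizon=50):
    pv = 0.0
    for t in range(horizon):
        flow = gain - (cost if t >= cost_start else 0.0)
        pv += beta ** t * flow
    return pv

for beta in (0.99, 0.90, 0.70):   # lower beta = more impatient politician
    print(f"beta={beta}: PV of the tariff cut = {present_value(beta):+.2f}")
# The impatient politicians see a positive value; the patient one does not.
```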

Friday, June 3, 2011

Should voting be compulsory?

Should one force people to vote? While there are clear incentives for people not to vote, because it is very unlikely that their individual vote will matter, there may be a social benefit to making sure that everyone, or at least many people, votes. Clearly, public decision-making is difficult when people do not voice an opinion. But imagine you are forced to vote: how should you vote? Selfishly, or for the public good? And how should that public good be defined? Your family, the neighborhood, your clan, your country? Indeed, if you force someone to vote, you must have an idea for what purpose you impose this.

Dan Usher tries to make sense of all this by focusing on the idea of the duty to vote, a duty being an unenforceable obligation. The paper is impossible to summarize without making a massacre of it, so I will abstain. It is full of ideas on how to think about the duty to vote, abstention, and mandatory voting. Read it if you are interested.

Thursday, June 2, 2011

Seat belts lead to safer driving

A classic example of the law of unintended consequences is how seat belt laws gave drivers reasons to drive more dangerously, as they feel more secure. This idea was popularized by Sam Peltzman and several follow-up studies.

Yong-Kyun Bae casts serious doubt on these results by pointing out that all these studies were based on aggregate data. He uses individual data instead, which allows him to exploit individual characteristics as well as the circumstances of accidents. And once you control for these factors and exploit cross-state variation in how seat-belt laws became more or less stringent over the last decade, it appears that more stringent laws make people drive more carefully. Indeed, even pedestrians are getting safer. If this result stands, the challenge is to explain it: do tougher seat-belt laws signal stronger enforcement of other traffic laws? In particular, as Bae suggests, these laws may come in tandem with cell-phone and texting-while-driving laws.

Wednesday, June 1, 2011

Risk-free rate tax deductions

The Norwegian shareholder tax is rather peculiar in that it allows the deduction of risk-free interest income, thus taxing only the risky portion of capital income. This is rather counter-intuitive, as one usually wants to encourage risk-taking in the form of venture capital or plain entrepreneurship. But the idea in Norway was that this would make the financing of firms neutral with respect to the source of funds.
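For concreteness, a minimal sketch of the mechanism with entirely hypothetical numbers: the deduction means only the return in excess of the risk-free rate is taxed.

```python
# Shareholder tax with a risk-free allowance; all numbers hypothetical.

basis = 100.0      # amount invested
r = 0.08           # realized equity return
rf = 0.03          # risk-free rate, deductible from the tax base
tax_rate = 0.28

taxable = max(0.0, basis * (r - rf))   # only the excess return is taxed
tax = tax_rate * taxable
print(f"gross return {basis * r:.2f}, taxable {taxable:.2f}, tax {tax:.2f}")
# A pure risk-free investment (r == rf) owes no tax at all, which is
# what makes the financing choice neutral in this logic.
```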

Jan Södersten and Tobias Lindhe argue that this line of reasoning is not appropriate for an open economy like Norway, with 56% foreign ownership. One needs to understand as well who is investing, as taxes are capitalized differently by different people. For an economy that is so open, returns are largely determined on international markets. What matters for Norway is then the after-tax return, and this is where new distortions enter the picture: large firms are financed on international markets, and the after-tax rate is set abroad. Small firms that finance themselves domestically have to provide similar after-tax returns, but domestic investors face different tax rules than their foreign counterparts. Severe under-investment in domestic firms could be the consequence. For a rather closed economy, however, this seems a good idea, especially as it is a neat way to prevent under-reporting of income.

Tuesday, May 31, 2011

Minimum wage and tax evasion

While the minimum wage is seen by many as an easy way to prevent poverty, it has its pitfalls. Apart from the fact that it may increase unemployment, it may lower wages and attract more immigrants. Now add to this that minimum wages may decrease disposable income and thus consumption.

Mirco Tonin looks at the case of a country with a significant underground economy. As only declared labor income is taxed, introducing a minimum wage means that everyone must declare at least the minimum wage as income. For at least some people, this will reduce disposable income. Now increase that minimum wage, and everyone who was declaring exactly that income has to pay more taxes. Is this significant? Tonin looks at Hungary, where there was a significant increase in the minimum wage in 2001 and where it is believed that half of those declaring the minimum wage on their tax return actually earn more. Looking at the gap between consumption and income in household data, a common way to identify underground income, he finds that the effect on consumption was quite significant for those who were affected by the minimum wage hike.
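A toy calculation, with entirely hypothetical numbers, makes the mechanism plain: an under-reporting worker must declare at least the minimum wage, so a hike raises his tax bill even though his true earnings are unchanged.

```python
# Toy illustration of the mechanism; all numbers are hypothetical.

TAX_RATE = 0.3
true_wage = 500.0            # actual earnings, partly undeclared

for minimum_wage in (200.0, 300.0):
    declared = minimum_wage  # an under-reporter declares exactly the minimum
    disposable = true_wage - TAX_RATE * declared
    print(f"minimum wage {minimum_wage:5.0f}: disposable income {disposable:6.1f}")
# Raising the minimum wage from 200 to 300 cuts disposable income by 30,
# which should show up as lower consumption in household data.
```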

Monday, May 30, 2011

What is a macroeconomic model?

Critics have had a field day the past couple of years claiming that Economics is not up to the task because its models are too abstract. Macroeconomics has been especially affected because the trigger of the bandwagon was a macroeconomic event, the now so-called Great Recession. I have discussed a few bad attempts at criticism, which were usually bad because they were ill-informed and could not offer any viable alternative.

The latest salvo comes from Hashem Pesaran and Ron Smith. They have several arguments. The first is that macroeconomic modeling of the DSGE brand insists too much on internal consistency and should allow more degrees of freedom to fit the data. Pesaran and Smith should first specify what the goal of the model is before criticizing the approach. If it is short-term forecasting, then go ahead with a purely statistical approach on macro data. If you want policy advice, you need something that withstands the Lucas Critique, and strong micro-foundations are then the way to go. But without a given purpose, any criticism is moot.

Pesaran and Smith, given their track record, are of course strong advocates of purely statistical methods: throw every possible series in a regression and see what sticks. I am not saying this cannot be useful, as it allows one to establish relationships in the data and I regularly report on such results, but it does not allow you to explain things. For that, you need some structure, and theory provides it. This brings me to the title of this post. There appears to be some disagreement about the meaning of the word "model." To me, a model is a set of relationships established by theory that can then be taken to data, used for policy experiments, and so on. For Pesaran and Smith, a model is a set of aggregate data series that are used in a statistical analysis of some kind (VAR, non-parametric, etc.). If we cannot agree on what we are talking about, of course there will be endless and fruitless discussions.

For example, they are not the first to criticize DSGE models for failing to include housing, finance and the external sector. Well, models (the way I see them) are abstractions, and you do not want to include everything in them, or you cannot understand them, interpret them, or do anything useful with them. It is so across all sciences. You build a model to answer a particular question, and you give it the necessary bells and whistles. The fact that most DSGE models did not include housing and bank liquidity is not a failure of DSGE modelling, it is a failure to recognize which questions could become important in the future, and that is damn hard to do properly.

Pesaran and Smith's solution to what they call the straitjacket of DSGE is to throw all these missing variables in a regression. Essentially, they want to bypass the discipline that theory imposes by letting the data speak. Again, this is OK if you want to explore and find relationships, but it is not going to be very useful if you want to explain what is going on. Specifically, they advocate using vector autoregressions (VARs). As they complain that DSGE models use representative agents when heterogeneity matters, they call for the use of data from several countries in the VAR, a rather strange argument, but I suppose this is because they are limited to aggregate data (and they neglect all the DSGE models using household level data...). In a statistical sense, the big problem is that one quickly runs out of degrees of freedom: one has only so many time periods, and every additional variable eats degrees of freedom at a quadratic rate (times the number of lags). The other problem is that interpreting the resulting errors ("shocks") becomes difficult. One is then limited to vague notions like demand and supply shocks, much like in factor analysis. But at least Pesaran and Smith acknowledge that theory can be useful in selecting, say, long-run restrictions.
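The degrees-of-freedom arithmetic is easy to make concrete: in a VAR with k variables and p lags, each equation carries kp + 1 coefficients (counting the intercept), so the system has k(kp + 1) in total, quadratic in k.

```python
# Coefficient counts in a VAR(p) with k variables and an intercept
# in each equation.

def var_params(k, p):
    per_equation = k * p + 1          # own and cross lags plus intercept
    return per_equation, k * per_equation

for k in (4, 8, 12):
    per_eq, total = var_params(k, p=4)
    print(f"k={k:2d}, p=4: {per_eq:3d} coefficients per equation, {total:4d} in the system")
# Going from 4 to 8 variables roughly quadruples the system-wide count
# (68 to 264), while the number of time periods stays fixed.
```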

PS: I made much the same arguments in my discussion of a paper by David Hendry. Pesaran appears to be more knowledgeable about DSGE and more willing to use theory to guide empirics. Unfortunately, he has the same habit of abusing self-citation: 12 of the 32 references.

Sunday, May 29, 2011

On the value of liberal arts education

I find a recent opinion article on CNN by Michael Roth, the president of Wesleyan University, on the value of liberal arts education rather upsetting. I can understand that as the president of a liberal arts college he wants to defend this particular type of education. But his arguments ring particularly hollow, and I would have expected better from someone leading one of the best liberal arts colleges.

His selling points are the following. 1) A broadly-based education is better than professional or technical expertise. 2) Liberal arts develop critical thinking and creativity. 3) Focusing on science and engineering is a serious mistake. 4) Effective implementation of new technology requires social and economic understanding. 5) Scholarship in the humanities increasingly requires scientists. 6) Flexibility is important on the job market.

I agree that an education can be too narrow. But the US liberal arts way of doing it is a waste. Undergraduate students spend less than two years' worth of coursework in their chosen major, and most end up being functionally incompetent in their major as they graduate. I realize this is largely due to the fact that high schools have failed to give them this broadly-based education as they water down requirements. But there has to be a better way. Send those who have not yet mastered the general education requirements to community colleges, for example.

If the US is the bastion of liberal arts, as Michael Roth claims, then he cannot claim it favors critical thinking. I am continually amazed at how woefully US students lack in this regard, with few exceptions of course. They are not interested in what they are studying or in the world outside. They are very passive and minimalist students. This is encouraged by the "anything goes" attitude that liberal arts foster.

The reason why the US is a world economic leader is that it has a scientific and technological edge, and that it has economic policies that provide good incentives, at least better than the rest of the world (and that Americans are obsessed with working). That edge is waning because other countries are catching up on the scientific and technological front and have already surpassed the US in several areas. Michael Roth apparently thinks it is wrong to try to keep that edge, and that one should focus more on social sciences, humanities and fine arts. He gets the causation wrong: one can afford those when one is rich, but they do not make you rich.

I think he is right on the fourth point. It is useless to engineer better crops if you cannot find a way for people to adopt them. But one does not need more liberal arts majors than scientists to achieve this. His fifth point actually shows that liberal arts need science, so science should not be discouraged.

I also agree with his sixth point. That is why one should have sufficient time to teach not just the recipes of a field, but also where they come from. This allows a student to later come up with new solutions to new problems. But the 3-4 semesters spent in a major do not allow this. The result is that in fields where this is not sufficient, students are not competent enough and end up in jobs outside their major and with low pay. Just look at what pay is across majors. Liberal arts majors are consistently at the bottom, also due to the fact that there are just too many of those students.

No, we should not encourage liberal arts education. This should be done in high school and community colleges. Let universities concentrate on the teaching of the core and produce truly competent professionals.

Friday, May 27, 2011

Medical expenditure and technology growth

People are running scared about the relative increase of health care costs. As discussed recently, there are many potential explanations out there, my preferred one being that service goods that rely heavily on labor are bound to become relatively more expensive than manufactured goods, which become cheaper to produce with technological progress. But one may also worry that this progress does the exact opposite for health care, if more technology makes it more expensive. Two recent papers look at the connection between technological progress and health care costs.

Justin Polchlopek wants a better understanding of medical technology and breaks away from using total factor productivity. Basically, he goes back to good old input-output modelling because he wants to avoid issues with capital aggregation that may matter. New technology in the medical sector is then equivalent to new capabilities in the model. Unless the use of existing capabilities is reduced, it is obvious that efficiency will fall. This means that how new technologies are diffused and how they replace old ones is critical. Add in poorly designed insurance, and you have a recipe for disaster, but one that can be largely reversed.
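For readers who have not seen input-output modelling in a while, the textbook Leontief core it builds on fits in a few lines. This is the generic framework, not Polchlopek's specific model, and the coefficients are made up:

```python
import numpy as np

# Textbook Leontief input-output core (generic illustration).
# A[i, j] = units of good i needed to produce one unit of good j.
A = np.array([[0.1, 0.3],
              [0.2, 0.1]])
d = np.array([10.0, 5.0])               # final demand

x = np.linalg.solve(np.eye(2) - A, d)   # gross output: x = (I - A)^(-1) d
print(x)

# A "new capability" in the medical sector would alter a column of A;
# whether total resource use falls depends on whether the old technique's
# column is actually retired, echoing the diffusion point above.
```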

Amitabh Chandra and Jonathan Skinner take a very different approach, focusing on demand and supply. They use a more aggregate health care production function and study what determines health care productivity. They distinguish three types of technologies: (I) highly cost-effective with little risk of overuse; (II) highly effective for some people or diseases; (III) treatments of uncertain effectiveness. Of course, focusing on (I) will increase productivity, while (III) is very costly for potentially little effect. Health care costs balloon if patients ask for the latest technology, which often falls into (III). Insurance systems, whether public or private, need to resist accommodating all such requests.

All in all, both papers show that new technology can become very costly if it is mismanaged. This can be corrected with insurance schemes that let patients feel in their pocketbook some of the consequences of their choices. Incentives through the budget constraint remain powerful disciplining devices.

Thursday, May 26, 2011

Higher local sales tax leads to more local retail activity

Many US states are currently tempted to increase sales taxes in order to overcome revenue shortfalls. While they should really be thinking about introducing a value-added tax instead, in the short run it is useful to study what the consequences of such a tax increase would be, not just for revenue, but also for economic activity and its composition.

Daria Burnes, David Neumark and Michelle White study the impact of higher sales taxes on retail employment at the local level. You would think that higher taxes lead retailers to flee, but one should not forget that local authorities have other tools, in particular zoning. They find that, in fact, locations with higher sales taxes have more retail employment, because authorities there make greater efforts to attract big retail outlets. The downside is that manufacturing employment gets crowded out by these efforts.

Wednesday, May 25, 2011

The demand for theater

What determines the demand for theater? Theater managers should be interested in understanding their market. Beyond this, the question also matters for policy, as theater is frequently and substantially subsidized. Thus the characteristics of those who go to the theater, and how frequently they do so, may help us understand whether it is worth subsidizing. For example, if only rich people go to the theater, one could leave the state out and let the public pay higher prices, which substitute for taxes (and would then improve efficiency). If it is mostly poor people who attend the theater, then it may be worth subsidizing if there is some sort of positive externality from it.

Concetta Castiglione uses micro data from Italy and finds results that are not very surprising: everything is driven by education and income. Rich, educated people pay more taxes and get them back in part in theater performances. This is even more pronounced than for higher education, for which forceful arguments have been made that the state should stop subsidizing it.

Of course, all this ignores considerations about supply, but that does not matter here. Demand should be essentially the same whether theaters are subsidized or not in Italy.

Tuesday, May 24, 2011

Negotiate drug price and availability jointly

Health care costs are increasing faster than general inflation almost everywhere, and they have been for some time now. While this should not be a surprise, as health care is mostly a service good, there is considerable grief over the situation. Among the initiatives to curb costs are efforts at prevention, instituting copays and regulating health care providers. What about pharmaceutical drugs?

Begona Garcia Marinoso, Izabella Jelovac, and Pau Olivella report on a rather common practice in Europe: external referencing, that is, setting a price cap on pharmaceuticals domestically based on what the price is abroad. This obviously has consequences for price negotiations in the foreign country, which is not too happy about it, as the pharmaceutical companies bargain harder there. But if the government can tie drug authorization into the negotiations, then prices are capped further and even the foreign country is not hurt. In other words, there is no reason for governments to give away bargaining power by putting price regulation and drug authorization in different agencies.
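A minimal sketch of how such a referencing rule works, with hypothetical prices and a made-up reference basket; actual rules differ across countries, some taking the minimum foreign price, others an average:

```python
# External reference pricing: cap the domestic price by prices abroad.
# Basket and prices are hypothetical; real rules vary by country.

foreign_prices = {"A": 12.0, "B": 10.5, "C": 14.0}

cap_min = min(foreign_prices.values())                        # strictest rule
cap_avg = sum(foreign_prices.values()) / len(foreign_prices)  # softer rule

proposed = 13.0
print(f"min-rule cap: {cap_min}, avg-rule cap: {cap_avg:.2f}")
print(f"negotiated domestic price: {min(proposed, cap_min)}")
# Anticipating this, firms bargain harder abroad, which is exactly the
# spillover the paper studies.
```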