The MILQs at Spousonomics riff on the subject of “learned incompetence.” It’s the strategic response to comparative advantage in the household:  if I am supposed to specialize in my comparative advantage I am going to make sure to demonstrate that my comparative advantage is in relaxing on the couch. Examples from Spousonomics:

Buying dog food. My husband has the number of the pet food store that delivers and he knows the size of the bag we buy. It would be extremely inconvenient for me to ask him for that number.

Sweeping the patio. He’s way better at getting those little pine tree needles out of the cracks. I don’t know how he does it!

A related syndrome is learned ignorance. It springs from the marital collective decision-making process.  Let’s say we are deciding whether to spend a month in San Diego.  Ideally we should both think it over, weigh and discuss the costs and benefits and come to an agreement. But what’s really going to happen is I am going to say yes without a moment’s reflection and her vote is going to be the pivotal one.

The reason is that, for decisions like this that require unanimity, my vote is only going to count when she is in favor.  Now she loves San Diego, but she doesn’t surf and so she can’t love it nearly as much as I do.  So she’s going to put more weight on the costs in her cost-benefit calculation.  I care about costs too, but I know that conditional on her being in favor, I am certainly in favor too.

Over time spouses come to know who is the marginal decision maker on all kinds of decisions.  Once that happens there is no incentive for the other party to do any meaningful deliberation.  Then all decisions are effectively made unilaterally by the person who is least willing to deviate from the status quo.

 

Why might democracies be less warlike than other regime types?

Two early and related ideas:

Thomas Paine ([3] p. 169): “What inducement has the farmer, while following the plough, to lay aside his peaceful pursuit, and go to war with the farmer of another country?”

Immanuel Kant ([1], p. 122): “if the consent of the subjects is required to determine whether there shall be war or not, nothing is more natural than that they should weigh the matter well, before undertaking such a bad business.”

This idea has influenced policymakers of different political persuasions.  Is there a rational choice/strategic theory for the democratic peace?  This lecture discusses various alternatives.

First, we study the fear motive for war. The median voter might be a coordination type who wants his country’s leader to be dovish against a dovish opponent but aggressive against an aggressive opponent.  This captures the Kant/Paine idea but also Schelling’s idea that aggression might arise out of fear.  These incentives imply that democracies are more responsive to aggression than other regime types.  A second possibility is that a leader can survive with the support of the “mob” (which has the same preferences as the median voter) or with the support of an elite that favors war. The leader loses power if he is weak in the face of aggression (no one supports him) but survives if he is aggressive even when an opponent is not (the hawkish elite support him).  This kind of regime is more aggressive than a dictatorship.  Data support these comparative statics.

Second, we study the greed motive.  The leader of a country may get a disproportionate share of the spoils of war should his country win but not suffer a large cost if it should lose.  He has a bias. If both leaders are biased, it may be impossible to avoid inefficient war even if transfers are possible.  But if both leaders are unbiased then transfers can resolve conflict and may even be unnecessary.

Third, bargaining may devolve into a war of attrition.  A democratically elected leader suffers greater “audience costs” if he backs down.  This makes him a tough bargainer and his opponent correspondingly weak.  A player may even deliberately “talk up” his audience costs to become a tough bargainer.

Here are the slides.

This leads us to the remarkable story of Imperial College’s self-effacing head librarian, pitted in a battle of nerves against the publisher of titles like the Lancet. She is leading Research Libraries UK (RLUK), which represents the libraries of Russell Group universities, in a public campaign to pressure big publishers to end up-front payments, to allow them to pay in sterling and to reduce their subscription fees by 15%. The stakes are high: library staff and services are at risk, and if an agreement or an alternative delivery plan is not in place by January 2nd next year, researchers at Imperial and elsewhere will lose access to thousands of journals. But Deborah Shorley is determined to take it to the edge if necessary: “I will not blink.”

The article is here.  Part of what’s at stake is the so-called “Big Deal” in which Elsevier bundles all of its academic journals and refuses to sell subscriptions to individual journals (or sells them only at exorbitant prices.)  Edlin and Rubinfeld provide a good overview of the law and economics of the Big Deal.

Boater Bow:  Not Exactly Rocket Science.

There was an interview with Tom Waits on the radio last week and I heard him say something that got me thinking.

I like hearing things incorrectly. I think that’s how I get a lot of ideas is by mishearing something.

It happens to me all the time.  It could be when I am half-listening to a lecture or catching a little snippet of a conversation by passersby.  It can even happen when I am listening to an interview on the radio.

There are good reasons why mishearing is a great source of new ideas.

  1. If you hear something already put together, you are prone to give it credence.  Ever notice how someone tells you about something surprising and right away you understand why it’s true?  Sometimes even before they are done talking?  The kickstarting effect of credence is a valuable scarce resource that is often wasted on the actually true.  Mishearing tricks you into believing something that is probably not true and sets your brain in motion to find something true in it.
  2. Mishearing isn’t random: your brain does its best to make sense of whatever comes in.  Think of the mishearing as some noise coming in and the brain assembling it into something useful.
  3. It’s not just noise that comes in.  You are mishearing something that originally made sense.  So most of the parts fit together in some way already.  The mishearing will just turn it around, extend it, or apply it to something new.

So how do you make it happen?  Tom Waits:

I like turning on two radios at the same time and listening to them.

An insightful analysis from John Quiggin at Crooked Timber of the organizational economics of Arab dictatorships.

The element of truth is that the Arab monarchies have good prospects of survival if they can manage the transition to constitutional monarchy. And it makes sense for them to do so. After all, a constitutional monarch gets to live, literally, like a king, without having to worry about boring stuff like budgets and foreign affairs. And, in the modern context, the risk that such a setup will be overthrown by a military coup, as happened to quite a few of the postcolonial constitutional monarchs, is much diminished. By contrast, there’s no such thing as a constitutional dictatorship or tyranny and no way to make the transition from President-for-Life to constitutional monarch. That’s not to say all the monarchs in the region will survive, or for that matter, that all the remaining dictatorships will fall. But the general point is valid enough.

With this corollary for Saudi Arabia

The other big problem is that this can’t easily be done in Saudi Arabia. There are not even the forms of a constitutional government to begin with. Worse, the state is not so much a monarchy as an aristocracy/oligarchy saddled with 7000 members of the House of Saud, and many more of the hangers-on that typify such states. These people have a lot to lose, and nothing to gain, from any move in the direction of democracy.

I mostly read blogs through Google Reader, but lately I have been thinking that my selection is becoming a little stale.  I want to mix it up a little bit: what blogs should I be reading?  I like economics blogs, but I like/get ideas from all kinds of blogs.  What I am aiming for is a minimal spanning set.

Epilogue:  Thanks for all the great suggestions!  Keep them coming.

This was going to happen eventually.

A warning: the unwary reader may suppose that the content of this article is ironic, exaggerated, or even apocryphal. Ironic remarks about the lethal effects of the adjustment plans that certain economists have promoted, and continue to promote, are a recurring theme. Marcelo Matellanes, the late philosopher and economist, maintained that economists, like doctors, should be required to take the Hippocratic oath, but with one additional detail: medical malpractice has more limited effects than the structural adjustment programs that some economists have put into practice in Latin American economies. In other words, bad doctors kill one patient at a time; bad economists do widespread harm.

The article, in Spanish obviously, is here.  (Google translate is at your service.) My translation:  two evil economists from the center countries (??) named Sandeep Baliga and Jeffrey Ely have written a paper which demonstrates how to use torture optimally.  They, and all economists for that matter, should report at once for ethical reprogramming.

Gat grope:  Santiago Oliveros.

I don’t have a Kindle but I noticed that people were complaining so much about the absence of page numbers on early versions that Amazon has restored page numbers in the latest Kindle software. This adherence to tradition (in which I include prudish Professors and Editors who demand precise page references in Bibliographies) destroys a unique advantage of eBooks that could make them more than just a fragile, signal-jamming replacement for old fashioned pulp.

Suspense requires randomization. If you are reading my paper-bound novel and I want to maximize your suspense I am constrained by your ability to infer, based on how many pages are left, the likelihood that the story is going to play out as staged or whether there will be another twist in the plot. It is impossible for me to convince you of a “false ending” if you are on page 200 out of 400. The bastard publisher has spoiled it for me because 1) he has, without my permission, smeared page numbers all over my handiwork, and 2) refused to add bulk by randomly inserting blank pages at the end to help me fool you.

Now Kindle, and eBook readers in general, allow me to shuck that constraint. I can end the novel at any point and you would never know that the end is right around the corner. I could make it 1 page long. Imagine the effect of that! I could make it grind to a halt on page 200 only to surprise you with a development completely out of the blue that takes another 200 pages to resolve.

But no, you can’t handle the suspense. You call yourself a reader but you are really just a page counter. You begged for your time-marking crutch and Amazon obliged. Your loss, my novel goes back in the drawer.

P.S. Emir Kamenica gets some of the blame for this post.

Q.S. Quote from my buddy Dave: The key is to have a useful term–for example, I have stopped using the term “page number” and now use the term “oprah” to refer to the location in printed matter. I encourage you to start using this in the classroom. “All right, please turn to Oprah 31.”

An important role of government is to provide public goods that cannot be provided via private markets. There are many ways to express this view theoretically, a famous one using modern theory is Mailath-Postlewaite.  (Here is a simple exposition.) They consider a public good that potentially benefits many individuals and can be provided at a fixed per-capita cost C.  (So this is a public good whose cost scales proportionally with the size of the population.)

Whatever institution is supposed to supply this public good faces the problem of determining whether the sum of all individuals’ values exceeds the cost.  But how do you find out individuals’ values?  Without government intervention the best you can do is ask them to put their money where their mouths are.  But this turns out to be hopelessly inefficient.  For example, if everybody is expected to pay (at least) an equal share of the cost, then the good will be produced only if every single individual has a willingness to pay of at least C.  The probability that this happens shrinks to zero exponentially fast as the population grows.  And in fact you can’t do much better than have everyone pay an equal share.
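To see that exponential decay concretely, here is a small simulation of my own (not from Mailath-Postlewaite; the uniform value distribution is an assumption made for illustration): each individual’s value is drawn uniformly on [0, 1], and the good is funded only if everyone’s value exceeds the equal cost share.

```python
import random

def funding_probability(n, cost_share, trials=100_000, seed=1):
    """Estimate the chance that all n individuals, with values drawn
    uniformly on [0, 1], are each willing to pay the equal cost share."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if all(rng.random() >= cost_share for _ in range(n)):
            hits += 1
    return hits / trials

# The exact probability is (1 - cost_share) ** n, which vanishes
# exponentially as the population n grows.
```

With a cost share of one half, five people fund the good about 3% of the time; twenty people essentially never do.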

Government can help because it has the power to tax.  We don’t have to rely on voluntary contributions to raise enough to cover the costs of the good. (In the language of mechanism design, the government can violate individual rationality.) But compulsory contributions don’t amount to a free lunch:  if you are forced to pay, you have no incentive to truthfully express your value for the public good.  So government provision of public goods helps with one problem but exacerbates another.  For example, if the policy is to tax everyone, then nobody gives reliable information about their value and the best the government can do is to compare the cost with the expected total value.  This policy is better than nothing, but it will often be inefficient since the realized values may be very different from their expectation.

But government can use hybrid schemes too.  For example, we could pick a representative group in the population and have them make voluntary contributions to the public good, signaling their value.  Then, if enough of them have signaled a high willingness to pay, we produce the good and tax everyone else an equal share of the residual cost.  This way we get some information revelation but not so much that the Mailath Postlewaite conclusion kicks in.

Indeed it is possible to get very close to the ideal mechanism with an extreme version of this.  You set aside a single individual and then ask everyone else to announce their value for the public good.  If the total of these values exceeds the cost you produce the public good and then charge them their Vickrey-Clarke-Groves (VCG) tax.  It is well known that these taxes provide incentives for truthful revelation but that the sum of these taxes will fall short of the cost of providing the public good. Here’s where government steps in.  The singled-out agent will be forced to cover the budget shortfall.
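As a rough sketch of that extreme scheme (my own stylization: the “pivotal” version of VCG, where an agent pays only when her report swings the decision, and the set-aside agent absorbs the shortfall):

```python
def pivotal_mechanism(values, cost):
    """values: reported values of everyone except the set-aside agent.
    Returns (produce?, per-agent Clarke taxes, shortfall charged to the
    set-aside agent)."""
    total = sum(values)
    produce = total >= cost
    taxes = []
    for v in values:
        others = total - v
        # Agent pays only if pivotal: the good is produced with her
        # report but would not have been without it.
        if produce and others < cost:
            taxes.append(cost - others)
        else:
            taxes.append(0)
    # Clarke taxes generically fall short of the cost; the set-aside
    # agent is forced to cover the difference.
    shortfall = cost - sum(taxes) if produce else 0
    return produce, taxes, shortfall
```

With reported values of 6 and 5 and a cost of 8, the good is produced, the two agents pay pivotal taxes of 3 and 2, and the set-aside agent is stuck with the remaining 3.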

Now obviously this is bad policy and is probably infeasible anyway since the poor guy may not be able to pay that much.  But the basic idea can be used in a perfectly acceptable way.  The idea was that by taxing an agent we lose the ability to make use of information about his value so we want to minimize the efficiency loss associated with that.  Ideally we would like to find an individual or group of individuals who are completely indifferent about the public good and tax them.  Since they are indifferent we don’t need their information so we lose nothing by loading all of the tax burden on them.

In fact there is always such a group and it is a very large group:  everybody who is not yet born.  Since they have no information about the value of a public good provided today they are the ideal budget balancers.  Today’s generation uses the efficient VCG mechanism to decide whether to produce the good and future generations are taxed to make up any budget imbalance.

There are obviously other considerations that come into play here and this is an extreme example contrived to make a point.  But let me be explicit about the point.  Balanced budget requirements force today’s generation to internalize all of the costs of their decisions.  It is ingrained in our senses that this is the efficient way to structure incentives.  For if we don’t internalize the externalities imposed on subsequent generations we will make inefficient decisions.  While that is certainly true on many dimensions, it is not a universal truth.  In particular public goods cannot be provided efficiently unless we offload some of the costs to the next generation.

In my youth, every black-turtleneck-wearing undergraduate hoping to get laid carried around Milan Kundera’s “The Unbearable Lightness of Being”, pretending to be mature enough to understand its rather adult themes.  To separate himself from the herd, a clever but randy student might also have an artfully ink-stained and annotated copy of Thomas Kuhn’s “The Structure of Scientific Revolutions”. He would pose around in coffee-shops and project a mood of delicate ennui, a mood that could be lifted by an interesting girl who could give some meaning to his life, especially in the bedroom. But only a professional philosopher would have a copy of Kripke’s “Naming and Necessity”.

According to Errol Morris, a filmmaker who made  “The Fog of War: Eleven Lessons From the Life of Robert S. McNamara”, the last book, while not a fundamental contribution to foreplay, is a fundamental contribution to the philosophy of science.  It is somewhat the antithesis of Kuhn’s theory.  So much so that Kuhn forbade Morris, then a philosophy student, from going to Kripke’s lectures.  It seems Morris and Kuhn had a difficult relationship.

I asked him, “If paradigms are really incommensurable, how is history of science possible? Wouldn’t we be merely interpreting the past in the light of the present? Wouldn’t the past be inaccessible to us? Wouldn’t it be ‘incommensurable?’ ”

He started moaning. He put his head in his hands and was muttering, “He’s trying to kill me. He’s trying to kill me.”

And then I added, “…except for someone who imagines himself to be God.”

It was at this point that Kuhn threw the ashtray at me.

And missed.

For that and more, see Morris’ essays in the NYT.  I am going to change my PhD supervision style now that I know the norm.

I am standing in front of an intimidating audience and a question stops me. I should know how to answer. I do know the answer. But it’s not coming to me right away. So the question has me stopped.

There is a silence. And at the center of that silence stands me waiting for the answer to come. At first. But the silence is piling up, and as it does I start to make alternative plans. Up to this point I have been hanging passively while some automatic mechanism searches through the files for the right thread, but now I may have to start actively conjuring something up.

The last thing you want to do is waste precious moments deciding when to cut off that search, especially because as soon as those thoughts start to creep in they threaten to be a self-fulfilling prophecy.

But I can’t not think about it. Because no matter how confident I am that I do have the answer stored in there somewhere there is always a chance that memory fails and as long I stand here, the answer is still not going to come to me. And the longer I wait the less time I am going to have to stammer out something forced. All the while the audience is growing uncomfortable.

This is more than just an optimal stopping problem because of the Zen state variable. It’s the Zen fixed point. The more confident you are that the answer will come, the less you will be infiltrated by thoughts of the eventual collapse, the more likely your confidence will be validated. And then there’s what happens when that doesn’t happen.

And then there’s what happens when you know all of the above and it either fuels your confidence (because you are the confident type) or sends you even sooner spiraling into a panic searching for plan B, crowding out plan Absent, all the while escalating the panic (because you are prone to panic) ensuring that whatever finally does come out is going to be a big mess.

Chickle: Type-A Meditation from www.f1me.net

It is about three and a half inches long and mostly black. It has a cap that, when removed, reveals a small silver point, out of the end of which comes black ink. There is a window of clear plastic on the body of the object through which you can monitor how quickly said ink disappears. The general shape is cylindrical. Its diameter is less than one centimeter and fits nicely between the fingers of a woman who is 5’4” tall with slightly oversized hands for her height. The decorative elements are minimal, but there are some advertorial ones. These read: “Pilot. Precise V5. Rolling Ball. Extra Fine.”

The rest of the ode is here.

It occurs to me that in our taxonomy of varieties of public goods, we are missing a category.  Normally we distinguish public goods according to whether they are rival/non-rival and whether they are excludable/non-excludable.  It is generally easier to efficiently finance excludable public goods because, by the threat of exclusion, you can get users to reveal how much they are willing to pay for access to the public good.

I read this article about piracy of sports broadcasts and I started to wonder what effect it will have on the business of sports.  Free availability of otherwise exclusive broadcasts means that professional sports change from an excludable to a non-excludable public good.  This happened to software and music, but unique aspects of those goods enable alternative revenue sources (support in the case of software, live performance in the case of music).

For sports the main alternative is advertising.  And since the only way to ensure that the ads can’t be stripped out of the hijacked broadcast is to make them part of the picture, we are going to see more and more ads projected directly onto the players and the field.

And then I started wondering what would be the analogue of advertising to support other non-excludable public goods.  The key property is that you cannot consume the good without being exposed to the ad.  What about clean air?  National defense?

But then I realized that there is something different about these public goods.  Not only are they non-excludable (a user cannot be prevented from using them), they are also unavoidable (the user himself cannot escape the public good).  And there is no reason to finance unavoidable public goods by any means other than taxation.

Here’s the point.  If the public good is avoidable, you can increase the user tax (by bundling ads) and trust that those who don’t value the public good very much will stop using it.  Given the level of the tax it would be inefficient for them to use it.  Knowing that this inefficiency is avoided, you have more flexibility to raise the tax, effectively price discriminating against high-value users.
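A stylized numerical version of that screening logic (my own illustration, with hypothetical user values; the grid search is just a convenience): the ad burden t acts like a usage fee, users consume only if their value exceeds it, and you can look for the burden that raises the most attention-revenue.

```python
def ad_revenue(t, values):
    # Users consume the avoidable public good only if their value is at
    # least the ad burden t; each consuming user "pays" t in attention.
    return t * sum(1 for v in values if v >= t)

def best_ad_burden(values, grid_steps=1000):
    # Grid search over [0, max value] for the revenue-maximizing burden.
    hi = max(values)
    candidates = (i * hi / grid_steps for i in range(grid_steps + 1))
    return max(candidates, key=lambda t: ad_revenue(t, values))
```

With three users valuing the good at 0.2, 0.5 and 0.9, the revenue-maximizing burden sits at about 0.5: the low-value user is screened out, and that loss is efficient given the burden.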

If the public good is unavoidable, everyone pays whether you use ads or just taxation (uncoupled with usage), so there really isn’t any difference.

So this category of an avoidable public good seems a useful one.  Can you think of other examples of non-excludable but avoidable public goods?  Sunsets: avoidable.  National defense:  unavoidable.

“When two dynamite trucks meet on a road wide enough for one, who backs up?” asks Schelling in his classic essay on bargaining.  There are multiple equilibria.  How can the solution be made determinate?

If one side can make a commitment, Schelling points out an easy solution:

“When one wishes to persuade someone that he would not pay more than $16,000 for a house that is really worth $20,000 to him, what can he do to take advantage of the usually superior credibility of the truth over a false assertion? Answer: make it true… But suppose the buyer could make an irrevocable and enforceable bet with some third party, duly recorded and certified, according to which he would pay for the house no more than $16,000, or forfeit $5,000.”

But what if both sides can make a commitment?  He says:

“Each must now recognize this possibility of stalemate, and take into account the likelihood that the other already has, or will have, signed his own commitment.”

And it is possible there is incomplete information, further complicating the issue.

In this class, I discuss two-player bargaining models where players can commit to demands.  If a demand is rejected or joint commitments are incompatible, there is a chance of bargaining breakdown or costly delay until an agreement is reached.

First, I begin with complete information.  The classic paper is by Rubinstein and it uses discounting to derive a unique equilibrium.  Since we want to study commitment, I instead follow Myerson’s analysis in his textbook.  In his model, if a proposer’s demand is accepted, the game ends, but if it is rejected, the game ends with probability p. If the game survives until the next period, the responder in the previous round becomes the proposer.  The risk of breakdown acts as a discount factor.  It is also a measure of commitment: the higher is p, the higher is the commitment to the demand.  Myerson shows there is a unique equilibrium which is a function of p.
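A back-of-the-envelope version of that equilibrium, assuming the standard alternating-offers recursion (a sketch in the spirit of the model above, not Myerson’s exact formulation): the survival probability 1 − p plays the role of the discount factor, and the proposer offers the responder exactly her continuation value.

```python
def equilibrium_split(p, iterations=200):
    """Proposer's equilibrium share when rejection ends the game with
    probability p.  The proposer's share x solves x = 1 - delta * x,
    since she offers the responder the responder's continuation value
    delta * x, where delta = 1 - p is the survival probability."""
    delta = 1.0 - p
    x = 0.5  # starting guess; the map is a contraction for delta < 1
    for _ in range(iterations):
        x = 1.0 - delta * x
    return x  # closed form: 1 / (1 + delta)
```

The comparative static matches the commitment interpretation: as p rises toward one, rejection is nearly fatal, and the proposer’s share rises toward the whole surplus.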

Second, suppose that with a small probability one player might be an r-insistent type who demands r and rejects any smaller offer.  Then Myerson shows that even if this player is not the r-insistent type, he can guarantee himself r in any equilibrium.  If he demands r repeatedly, the opponent will give up rather than fight forever, as he might be facing the r-insistent type.  The rational player then knows he can get r eventually by pretending to be the r-insistent type.  After a few rounds of haggling, his opponent is forced to give this to him. This solution does not depend on p.  Hence, by adding a small probability that a player might be an r-insistent type, we have changed the equilibrium dramatically from the complete information model.  In some sense, the player who might be an r-insistent type has a first-mover advantage.  The bound on this player’s payoff varies with r, so the equilibrium is not robust to the type that was added to the game.

What happens if both players can commit?  Abreu and Gul study this issue.  They show that essentially all bargaining games devolve into a war of attrition where rational players either pretend to be r-insistent types with incompatible demands or reveal their rationality and concede, as in the Myerson game.  In equilibrium, the probability that the players are r-insistent types must reach one simultaneously for both.  Otherwise, if, say, it reaches one for player 1 first, the rational type of player 2 must still be dropping out after he knows player 1 will not concede.  But it is better for player 2 to deviate, give in earlier, and get surplus rather than waste time.  This idea pins down a property of the endpoint of the war of attrition, and other arguments can then be used to derive the unique equilibrium.

It is surprising that the equilibrium is unique: bargaining games and games with incomplete information typically have multiple equilibria.  The equilibrium is still sensitive to the r-insistent types.  I believe this issue is resolved in later papers by Kambe and by Abreu and Pearce, but I did not get to them.  Here are my slides.

  1. The device in question almost rhymes with rickshaw.
  2. This explains why there are so many missing parts.
  3. During my years in Berkeley I made sure that this man told me “I hate you” at least once a week. (Where’s Stoney Burke??)
  4. Best use of dead trees ever. (But what if she’s actually crying?)
  5. 16 items available only in Chinese WalMarts.

Bryan Caplan wonders whether economic theory is on the decline. Here are some signs I have noticed:

  1. Econometrica, the most theory-oriented of the top 4 journals, has a well-publicized mission to publish more applied, general-interest articles, and this is indeed happening.  This comes at the expense of pure theory as well as theoretical econometrics.
  2. The new PhD market was, on the whole, difficult for theorists this year.  Strong candidates from Yale, Stanford, NYU and Princeton were placed much lower than expected, some without a job offer in North America as of yet.  As far as I can tell, there will be only two junior theorists hired at top 5 departments.

But there are many positive signs too

  1. Theorists have become recruiting targets for high-profile private sector jobs: Michael Schwarz and Preston McAfee at Yahoo!, Susan Athey at Microsoft, for example.  In addition, the research departments in these places are full of theorists-on-leave.
  2. Despite some overall weakness, theory is and always has been well represented at the top of the junior market.  This year Alex Wolitzky, as pure a theorist as there is, is the clear superstar of the market.  Here is the list of invitees to the Review of Economic Studies Tour from previous years.  This is generally considered to be an all-star team of new PhDs in each year.  Two theorists out of seven per year on average.  (No theorist last year though.)
  3. In recent years, two new theory journals, Theoretical Economics and American Economic Journal:  Microeconomics, have been adopted by the leading Academic Societies in economics.  These journals are already going strong.
  4. Market design is an essentially brand new field and one of the most important contributions of economics in recent years.  It is dominated by theorists.

In my opinion there are some signs of change but, correctly interpreted, these are mostly for the better.  Decision theory, always the most esoteric of subfields, has moved to the forefront as a second wave of behavioral economics.  Macroeconomics today is more heavily theory-oriented than ever.  Theorists (and theory journals) are drawn away from pure theory and toward applied theory not because pure theory has diminished in any absolute sense, but rather because applied theory has become more important than ever.

Professor Caplan offers some related observations in his commentary:

…mathematicians masquerading as economists were never big at GMU, and it’s hard to see how they could do well in the blogosphere either.

I am sure he is not talking about Sandeep and me because we are just as bad at math as all of the other bloggers who pretend to be economists.  But just in case he is, I invite him to take a look around.  Finally,

My econjobrumors insider tells me that its countless trolls are largely frustrated theorists who feel cheated of the respect they think the profession owes them.  Speculation, yes, but speculation born of years of study of their not-so-silent screams.

He is talking about the people who anonymously post sometimes hilarious, sometimes obnoxious vitriol on that outpost of grad student angst known as EJMR. I wonder how he could possibly know the research area of anonymous posters to that web site? Among all the economists who feel cheated out of the respect that they think the profession owes them why would it be that theorists are the most likely to troll?

A principal employs an agent to collect taxes.  The principal cannot observe how hard the agent is working.  What is the best way to set up incentives?  Suppose the principal asks for a share of the tax revenue.  This acts as a tax on the agent’s effort so he will under-invest.  The agent also has the incentive to hide tax revenue.  Not the best incentive scheme.  Perhaps the only practical alternative is to “sell the firm to the agent”: demand a lump-sum fee for the right to become a tax collector.  The agent has great incentives to generate tax revenue.  The principal extracts the tax collector’s expected profit upfront via the lump sum fee.  A classic two-part tariff.
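A toy calculation of the two schemes (my own illustration; the linear revenue and quadratic effort cost are assumptions chosen for concreteness, not part of the post): under a revenue share the agent under-invests, while “selling the firm” restores first-best effort and lets the principal extract the surplus as a lump-sum fee.

```python
def agent_effort(share, cost_coeff=1.0):
    """The agent keeps `share` of revenue (linear in effort e) and bears
    effort cost (cost_coeff/2) * e**2.  The first-order condition of
    share * e - (cost_coeff/2) * e**2 gives e* = share / cost_coeff."""
    return share / cost_coeff

def lump_sum_fee(cost_coeff=1.0):
    """Selling the firm: share = 1, so the agent exerts first-best
    effort; the maximal up-front fee equals her resulting profit."""
    e = agent_effort(1.0, cost_coeff)
    return e - 0.5 * cost_coeff * e**2
```

A 50% revenue share induces only half the first-best effort; with the franchise sold outright, effort doubles and the principal collects the agent’s entire expected profit up front.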

According to Dexter Filkins in the New Yorker, here is how bribery works in Afghanistan:

Bribes feed bribes: if an Afghan aspires to be a district police officer, he must often pay a significant amount, around fifty thousand dollars, to his boss, who is usually the provincial police chief. The policeman earns back the money by shaking down ordinary Afghans… “It’s a vertically integrated criminal enterprise,” one American official told me.

Lagers like Stella Artois are bottom-fermented while classical Belgian beers are top-fermented.  Lagers are clearer and look “cleaner” in a transparent glass.  Perhaps because of this, demand shifted to lagers.  The share of lager beers in Belgium went from 15% before WWI to 70% after WWII.

Bottom fermentation requires more equipment to cool the beer during fermentation and maturation.  Hence, it gains from greater scale.  Greater demand plus cost economies led to the market shifting towards a few large breweries. These set a lower price than smaller breweries and drove them out of business.  Add to this the costs of advertising and scale advantages multiply….

For this and more on Lambics and Abbey beers, see Belgian Beers: Where History Meets Globalization.

From a fun little article by Andrew Gelman and Deborah Nolan:

The law of conservation of angular momentum tells us that once the coin is in the air, it spins at a nearly constant rate (slowing down very slightly due to air resistance). At any rate of spin, it spends half the time with heads facing up and half the time with heads facing down, so when it lands, the two sides are equally likely (with minor corrections due to the nonzero thickness of the edge of the coin); see Figure 3. Jaynes (1996) explained why weighting the coin has no effect here (unless, of course, the coin is so light that it floats like a feather): a lopsided coin spins around an axis that passes through its center of gravity, and although the axis does not go through the geometrical center of the coin, there is no difference in the way the biased and symmetric coins spin about their axes.

On the other hand, a weighted coin spun on a table will show a bias for the weighted side.  The article describes some experiments and statistical tests to use in the classroom.  There are some entertaining stories too.  Like how the King of Norway avoided losing the entire Island of Hising to the King of Sweden by rolling a 13 with a pair of dice (“One die landed six, and the other split in half landing with both a six and a one showing.”)

Visor volley:  Toomas Hinnosaar.

I wrote last week about More Guns, Less Crime.  That was the theory, let’s talk about the rhetoric.

Public debates have the tendency to focus on a single dimension of an issue with both sides putting all their weight behind arguments on that single front.  In the utilitarian debate about the right to carry concealed weapons, the focus is on More Guns, Less Crime. As I tried to argue before, I expect that this will be a lost cause for gun control advocates.  There just isn’t much theoretical reason why liberalized gun carry laws should increase crime.  And when this debate is settled, it will be a victory for gun advocates and it will lead to a discrete drop in momentum for gun control (that may have already happened.)

And that will be true despite the fact that the real underlying issue is not whether you can reduce crime (after all, there are plenty of ways to do that) but at what cost.  And once the main front is lost, it will be too late for fresh arguments about externalities to have much force in public opinion.  Indeed, for gun advocates the debate could not be more fortuitously framed if the agenda were set by a skilled debater.  A skilled debater knows the rhetorical value of getting your opponent to mount a defense and thereby implicitly cede the importance of a point, and then overwhelming his argument on that point.

Why do debates on inherently multi-dimensional issues tend to align themselves so neatly on one axis?  And given that they do, why does the side that’s going to lose on those grounds play along?  I have a theory.

Debate is not about convincing your opponent but about mobilizing the spectators.  And convincing the spectators is neither necessary nor sufficient for gaining momentum in public opinion.  To convince is to bring others to your side.  To mobilize is to give your supporters reason to keep putting energy into the debate.

The incentive to be active in the debate is multiplied when the action of your supporters is coordinated and when the coordination among opposition is disrupted.  Coordinated action is fueled not by knowledge that you are winning the debate but by common knowledge that you are winning the debate.  If gun control advocates watch the news after the latest mass killing and see that nobody is seriously representing their views, they will infer they are in the minority and give up the fight even if in fact they are in the majority.

Common knowledge is produced when a publicly observable bright line is passed.  Once that single dimension takes hold in the public debate it becomes the bright line:  When the dust settles it will be common knowledge who won. A second round is highly unlikely because the winning side will be galvanized and the losing side demoralized.  Sure there will be many people, maybe even most, who know that this particular issue is of secondary importance but that will not be common knowledge.  So the only thing to do is to mount your best offense on that single dimension and hope for a miracle or at least to confuse the issue.

(Real research idea for the vapor mill.  Conjecture:  When x and y are random variables it is “easier” to generate common knowledge that x>0 than to generate common knowledge that x>y.)

Chickle:  Which One Are You Talking About? from www.f1me.net.

David Mitchell is a stammerer who wrote beautifully about it in his semi-autobiographical novel Black Swan Green. Here is Mitchell on The King’s Speech.  In the article he talks about his own strategies for coping with stammering:

If these technical fixes tackle the problem once it’s begun, “attitudinal stances” seek to dampen the emotions that trigger my stammer in the first place. Most helpful has been a sort of militant indifference to how my audience might perceive me. Nothing fans a stammer’s flames like the fear that your listener is thinking “Jeez, what is wrong with this spasm-faced, eyeball-popping strangulated guy?” But if I persuade myself that this taxing sentence will take as long as it bloody well takes and if you, dear listener, are embarrassed then that’s your problem, I tend not to stammer. This explains how we can speak without trouble to animals and to ourselves: our fluency isn’t being assessed. This is also why it’s helpful for non-stammerers to maintain steady eye contact, and to send vibes that convey, “No hurry, we’ve got all the time in the world.”

(Gat Gape:  The Browser) Incidentally, I watched The King’s Speech and also True Grit on a flight to San Francisco Sunday night while the Oscars were being handed out down below. I enjoyed the portrayal of stammering in TKS but unlike Mitchell I didn’t think that subject matter alone carried an entire film.  And there wasn’t much else to it.  (And by the way here is Christopher Hitchens complaining about the softie treatment of Churchill and King Edward VIII.)

True Grit was also a big disappointment.  I haven’t seen Black Swan but I hear it has some great kung fu scenes.

Whenever I teach the Vickrey auction in my undergraduate classes I give this question:

We have seen that when a single object is being auctioned, the Vickrey  (or second-price) auction ensures that bidders have a dominant strategy to bid their true willingness to pay. Suppose there are k>1 identical objects for sale.  What auction rule would extend the Vickrey logic and make truthful bidding a dominant strategy?

Invariably the majority of students give the intuitive, but wrong answer.  They suggest that the highest bidder should pay the second-highest bid, the second-highest bidder should pay the third-highest bid, and so on.

Did you know that Google made the same mistake?  Google’s system for auctioning sponsored ads for keyword searches is, at its core, the auction format that my undergraduates propose (plus some bells and whistles that account for the higher value of being listed closer to the top and Google’s assessment of the “quality” of the ads.)  And indeed Google’s marketing literature proudly claims that it “uses Nobel Prize-winning economic theory.”  (That would be Vickrey’s Nobel.)

But here’s the remarkable thing.  Although my undergraduates and Google got it wrong, in a seemingly miraculous coincidence, when you look very closely at their homebrewed auction, you find that it is not very different at all from the (multi-object) Vickrey mechanism.  (In case you are wondering, the correct answer is that all of the k highest bidders should pay the same price: the k+1st highest bid.)
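The two pricing rules can be put side by side in a few lines. This is an illustrative sketch with invented bids, not Google's actual system (which layers position values and quality scores on top):

```python
# Two pricing rules for k identical objects, applied to invented bids.

def students_rule(bids, k):
    """Each winner pays the next bid below his own: the intuitive but wrong answer."""
    ranked = sorted(bids, reverse=True)
    return [ranked[i + 1] for i in range(k)]

def vickrey_rule(bids, k):
    """All k winners pay the same price: the (k+1)-st highest bid."""
    ranked = sorted(bids, reverse=True)
    return [ranked[k]] * k

bids = [10, 8, 6, 4, 2]
print(students_rule(bids, 2))  # [8, 6]
print(vickrey_rule(bids, 2))   # [6, 6]

# Why truthful bidding fails under the students' rule: a bidder with value 10
# facing rival bids 8 and 6 when k = 2 slots are for sale.
def payoff_students(my_bid, value=10, others=(8, 6), k=2):
    ranked = sorted([my_bid, *others], reverse=True)
    if my_bid not in ranked[:k]:
        return 0  # lost: no object, no payment
    return value - ranked[ranked.index(my_bid) + 1]

assert payoff_students(10) == 2  # truthful: win the top slot, pay 8
assert payoff_students(7) == 4   # shading wins a cheaper slot: pay 6 instead
```

Note that only the top bidder's price differs between the two rules here, which is a small-scale version of the "not very different at all" observation below.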

In a famous paper, Edelman, Ostrovsky and Schwarz (and contemporaneously Hal Varian) studied the auction they named The Generalized Second Price Auction (GSPA) and showed that it has an equilibrium in which bidders, bidding optimally, effectively undo Google’s mistaken rule and restore the proper Vickrey pricing schedule.  It’s not a dominant strategy, but it is something pretty close:  if everyone bids this way no bidder is going to regret his bid after the auction is over. (An ex post equilibrium.)

Interestingly this wasn’t the case with the old style auctions that were in use prior to the GSPA.  Those auctions were based on a first-price model in which the winners paid their own bids.  In such a system you always regret your bid ex post because you either bid too much (anything more than your opponents’ bid plus a penny is too much) or too little.  Indeed, advertisers used software agents to modify their standing bids at high-frequencies in order to minimize these mistakes.  In practice this meant that auction outcomes were highly volatile.

So the Google auction was a happy accident.  On the other hand, an auction theorist might say that this was not an accident at all.  The real miracle would have been to come up with an auction that didn’t somehow reduce to the Vickrey mechanism.  Because the revenue equivalence theorem says that the exact rules of the auction matter only insofar as they determine who the winners are.  Google could use any mechanism, and as long as it guarantees that the bidders with the highest values win, the outcome can be supported in an ex post equilibrium with the bidders paying exactly what they would have paid in the Vickrey mechanism.

Suppose there are two players and each has private information about how tough he is.  The two toughness parameters together determine the probability of winning should there be a war.  If the parameters are common knowledge, it is possible to avoid war by making a transfer that makes war pointless.  By making a transfer, the target has fewer resources to capture and the challenger has more to lose, and an appropriate transfer can create the right balance to avoid war.  But if there is incomplete information, a player might start a war.

Is it possible to set up transfers to completely prevent inefficient war?  Myerson and Satterthwaite asked this question in a classical model of trade with incomplete information.  We can use similar techniques to answer a similar question in a conflict scenario.  In other words, we can use the revelation principle and ask whether it is possible to design transfers as a function of reports to guarantee peace in all circumstances.  Players’ types – their toughness parameters – directly affect their payoffs only if there is war.  Since there is no war in equilibrium, it is impossible to separate out different types and transfers must be constant as a function of reports.  The constant payoff each player then receives must be enough to dissuade his toughest type from starting a war.  If this is impossible to guarantee for both players’ toughest types simultaneously, there must be war.  Here are the slides.
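The impossibility argument can be compressed into a couple of inequalities. This is my own stripped-down rendering, with notation that need not match the slides:

```latex
% Normalize total resources to 1. Let W_i(\theta_i) be player i's expected
% war payoff, increasing in toughness \theta_i, with toughest type \bar\theta_i.
% Since types matter only in the event of war and there is no war on path,
% peace must deliver type-independent payoffs u_1, u_2 with u_1 + u_2 \le 1.
% No type of player i starts a war only if u_i \ge W_i(\bar\theta_i).
% Hence peace can be guaranteed if and only if
\[
W_1(\bar\theta_1) + W_2(\bar\theta_2) \le 1.
\]
```

When the toughest types' combined war payoffs exceed the total pie, no transfer scheme can buy both of them off at once, and war must occur with positive probability.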

If you’ve ever sat down at a pub to a plate of really good fish and chips—the kind in which the fish stays tender and juicy but the crust is supercrisp—odds are that the cook used beer as the main liquid when making the batter. Beer makes such a great base for batter because it simultaneously adds three ingredients—carbon dioxide, foaming agents and alcohol—each of which brings to bear different aspects of physics and chemistry to make the crust light and crisp.

The CO2 escaping from the frying batter makes for a light texture.  This effect is enhanced by the low surface tension, (which in the glass makes the foamy head), keeping the bubbles in place for the duration of the cooking process.  And the alcohol evaporates faster than water so that the crust sets quickly reducing the risk of overcooking.  The story is in Scientific American.

I was walking along, and I saw just this hell of a big moose turd, I mean it was a real steamer! So I said to myself, “self, we’re going to make us some moose turd pie.” So I tipped that prairie pastry on its side, got my sh*t together, so to speak, and started rolling it down towards the cook car: flolump, flolump, flolump. I went in and made a big pie shell, and then I tipped that meadow muffin into it, laid strips of dough across it, and put a sprig of parsley on top. It was beautiful, poetry on a plate, and I served it up for dessert.

Here’s one of the thorniest incentive problems known to man.  In an organization there is a job that has to be done.  And not just anybody can do it well, you really need to find the guy who is best at it.  The livelihood of the organization depends on it.  But the job is no fun and everyone would like to get out of doing it.  To make matters worse, performance is so subjective that no contract can be written to compensate the designee for a job well done.

The core conflict is exemplified in a story by Utah Phillips about railroad workers living out in the field as they work to level the track.  Someone has to do the cooking for the team and nobody wants to do it.  Lacking any better incentive scheme they went by the rule that if you complained about the food then from now on you were going to have to do the cooking.

You can see the problem with this arrangement.  But is there any better system?  You want to find the best cook but the only way to reward him is to relieve him of the job.  That would be self-defeating even if you could get it to work.  You probably couldn’t because who would be willing to say the food was good if it meant depriving themselves of it the next time?

A simple rotation scheme at least has the benefit of removing the perverse incentive.  Then on those days when the best cook has the job we can trust that he will make a good meal out of his own self interest.  He might even volunteer to be the cook.

But it might be optimal to rule out volunteering too.  Because that could just bring back the original incentive problem in a new form.  Since ex ante nobody knows who the best cook is, everyone will set out to prove that they are incapable of making a palatable meal so that the one guy who actually can cook, whoever he is, will volunteer.

It may help to keep the identity of the cook secret.  Then when a capable cook actually has the job he can feel free to make a good meal without worrying that he will be recruited permanently.  It will also lower the incentive for the others to make a bad meal because nobody will know who to exclude in the future.

Even if there is no scheme that really solves the incentive problem, the freedom to complain is essential for organizational morale.

Well, this big guy come into the mess car, I mean, he’s about 5 foot forty, and he sets himself down like a fool on a stool, picked up a fork and took a big bite of that moose turd pie. Well he threw down his fork and he let out a bellow, “My God, that’s moose turd pie!”

“It’s good though.”

  1. Watson v Pynchon v McCarthy
  2. When chivalry was still alive, you would never put a bag over her head.  A gentleman would use a basket instead.
  3. However there was one catch: he would have to be awake and playing the banjo whilst under the knife (with video.)
  4. PDF of junior Ghaddafi’s PhD thesis from the LSE.

Research by Chris Avery, Andrew Fairbanks and Richard Zeckhauser showed that early admissions (EA) programs give applicants a boost in college admissions.  Improved chances of admissions might reflect a better applicant pool and not an advantage built into the early admissions process.  But Avery et al controlled for this and still found that EA gives applicants an advantage.

EA applicants are constrained in their choices should they actually be admitted.  To attract them, colleges have to offer lower standards for acceptance for early admission than for regular applicants who have more freedom of choice.  Done in isolation, EA might benefit a college, as it steals above average students from its competitors.  But if one college is employing EA, so must others, to recapture some of those students they lost.  When all colleges use EA, the average quality of the student pool in each college may actually decline, because slots are taken up by the lower quality early applicants who crowd out high quality regular applicants.  A Prisoners’ Dilemma.
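The Prisoners' Dilemma structure can be made concrete with a toy payoff matrix. The numbers are invented; they just encode the crowd-out story above:

```python
# Two colleges choose whether to run EA. Payoffs (invented) are average
# student quality: poaching with EA helps against a rival without it, but
# (EA, EA) locks both into weaker early pools than (No EA, No EA).
payoffs = {
    ("EA", "EA"): (2, 2),
    ("EA", "No EA"): (4, 1),
    ("No EA", "EA"): (1, 4),
    ("No EA", "No EA"): (3, 3),
}

def best_reply(rival):
    return max(["EA", "No EA"], key=lambda a: payoffs[(a, rival)][0])

# EA is a best reply to either rival choice, yet mutual EA is worse for
# both than mutual restraint: a Prisoners' Dilemma.
print(best_reply("EA"), best_reply("No EA"))  # EA EA
```

This also shows why unilateral disarmament fails: a college dropping EA against an EA rival lands in the worst cell of its own matrix.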

There are other effects.  Early applicants get worse deals on financial aid as they cannot play off multiple offers.  So, early admission will attract wealthy students.  They will also be more clued into the system.  There are impacts on diversity.

But one or two colleges cannot change the equilibrium on their own.  All have to give it up.  Harvard and Princeton tried to drop EA but most other colleges did not.  After all, the quality of EA applicants at Yale and elsewhere goes up as Harvard and Princeton drop their programs.  So it all fell apart and now both Princeton and Harvard have reinstated EA.  A Harvard Dean says:

“We looked carefully at trends in Harvard admissions these past years and saw that many highly talented students, including some of the best-prepared low-income and underrepresented minority students, were choosing programs with an early-action option, and therefore were missing out on the opportunity to consider Harvard.”

EA can impact all sorts of settings.  A player can try to cream-skim before competitors notice.  And so NU student Kota Saito heads off to Caltech without even going on the job market.  I know MEDS has done something similar in the past.  Will other universities start doing this sort of thing too?

My daughter’s 4th grade class read the short story “The Three Questions” by Tolstoy (a two minute read.)  This afternoon I led a discussion of the story. Here are my notes.

There is a King who decides that he needs the answers to three questions.

  1. What is the best time to act?
  2. Who is the most important person to listen to?
  3. What is the most important thing to do?

Because if he knew the answers to these questions he would never fail in what he set out to do.  He sends out a proclamation in his Kingdom offering a reward to anyone who can answer these questions but he is disappointed because although many offer answers…

All the answers being different, the King agreed with none of them, and gave the reward to none.

So instead he goes to see a hermit who lives alone in the Wood and who might be able to answer his questions.  The King and the hermit spend the day in silence digging beds in the ground.  Growing impatient, the King confronts the hermit and makes one final request for the answers to his questions.  But before the hermit is able to respond they are interrupted by a wounded stranger who needs their help.  They bandage the stranger and lay him in bed and the King himself falls asleep and does not wake until the next morning.

As it turns out, the stranger had intended to murder the King but was caught by the King’s bodyguard and stabbed.  Unknowingly, the King had saved his enemy’s life, and now the man was eternally grateful and begging for the King’s forgiveness. The King returns to the hermit and asks again for the answers to his questions.

“Do you not see,” replied the hermit. “If you had not pitied my weakness yesterday, and had not dug those beds for me, but had gone your way, that man would have attacked you, and you would have repented of not having stayed with me. So the most important time was when you were digging the beds; and I was the most important man; and to do me good was your most important business. Afterwards when that man ran to us, the most important time was when you were attending to him, for if you had not bound up his wounds he would have died without having made peace with you. So he was the most important man, and what you did for him was your most important business. Remember then: there is only one time that is important– Now! It is the most important time because it is the only time when we have any power. The most necessary man is he with whom you are, for no man knows whether he will ever have dealings with any one else: and the most important affair is, to do him good, because for that purpose alone was man sent into this life!”

We are left to decide for ourselves what the King will do with these answers. The King abhors uncertainty. This is why he discarded the many different answers given by the learned men in his Kingdom. The simplicity of the hermit’s advice is bound to appeal to the King. It is certainly a rule that can be applied in any situation. And it is indeed motivated by acknowledgement of uncertainty in the extreme.  The Here and Now are the only certainties. And it follows from uncertainty about where you will be in the future, with whom you will be, and what options will be before you that the Here and Now are the most important.

(The hermit is not only outlining a foundation for hyperbolic discounting, but also a Social Welfare analog.  Your social welfare function should heavily discount all people except those who are before you right now.)

But what would come of the King were he to follow the advice of the hermit? Imagine what it would be like to live like that. Would you ever even make it to the bathroom to brush your teeth? How many opportunities and people would distract you along the way?

If the hermit’s advice were any good then surely the hermit himself must follow it. Perhaps the hermit was a King once.

You probably know the Ellsberg urn experiment. In the urn on the left there are 50 black balls and 50 red balls. In the urn on the right there are 100 balls, some of them red and some of them black. No further information about the urn on the right is given. Subjects are allowed to pick an urn and bet on a color. They win $1 if the ball drawn from the urn they selected is the color they bet.

Subjects display aversion to ambiguity: they strictly prefer to bet on the left urn where the odds are known than on the right urn where the odds are unknown. This is known as Ellsberg’s paradox, because whatever probabilities you attach to the distribution of balls in the right urn, there is a color you could bet on and do at least as well as the left urn. This experiment revealed a new dimension to attitudes towards uncertainty that has the potential to explain many puzzles of economic behavior. (The most recent example being the job-market paper of Gharad Bryan from Yale who studies the extent to which ambiguity can explain insurance market failures in developing countries.)

Decades and thousands of papers on the subject later, there remains a famous critique of the experiment and its interpretation due to Raiffa. The subjects could “hedge” against the ambiguity in the right urn by tossing a coin to decide whether to bet on red or black. To see the effect, note that if there are n black balls and (100-n) red balls, then with 50% probability you bet on black and win with probability n/100, and with 50% probability you bet on red and win with probability (100-n)/100, giving a total probability of winning equal to 50%. Exactly the same odds as the left urn, no matter what the actual value of n is. Given this ability to remove ambiguity altogether, the choices of the subjects cannot be interpreted as having anything to do with ambiguity aversion.
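The hedge can be verified exactly rather than by simulation. A small sketch of the arithmetic in the critique (the fifty-fifty mixture is from the argument above; the code is mine):

```python
from fractions import Fraction

# Raiffa's hedge: if the ambiguous urn holds n black and (100 - n) red balls,
# tossing a fair coin to choose a color wins with probability exactly 1/2,
# whatever n turns out to be.
def win_probability(n):
    p_black = Fraction(n, 100)
    # Half the time bet black (win prob n/100), half the time bet red.
    return Fraction(1, 2) * p_black + Fraction(1, 2) * (1 - p_black)

assert all(win_probability(n) == Fraction(1, 2) for n in range(101))
```

Using exact rationals rather than floats makes the point cleanly: the n-dependence cancels identically, not approximately.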

Kota Saito begins with the observation that the Raiffa randomization is only one of two ways to remove the ambiguity from the right urn. Another way is to randomize ex post. Hypothetically: first draw the ball, observe its color, and then toss a coin to decide whether to bet on red or black. Like the ex-ante coin tossing, this strategy guarantees that you have a 50% chance of winning. Kota points out that theories that formalize ambiguity assume that these two strategies are viewed equivalently by decision-makers. If a subject is ambiguity averse, then he prefers either form of randomization to the right urn and he views either of them as indifferent to the left urn.

But the distinct timing makes them conceptually different. In the ex ante case, after the coin is tossed and you decide to bet on red, say, you still face ambiguity going forward just as you would have if you had chosen to bet on red without tossing a coin. In the ex post case, all of the ambiguity is removed once you have decided how to bet. (There is an old story about Mark Machina’s mom that relates to this. See example 2 here.)

Kota disentangles these objects and models a decision-maker who may have distinct attitudes to these two ways of mixing objective randomization with subjectively uncertain prospects. In particular he weakens the standard axiom which requires that the order of uncertainty resolution doesn’t matter to the decision-maker. With this weaker assumption he is able to derive an elegant model in which a single parameter encodes the decision-maker’s pattern of ambiguity attitudes. Interestingly, the theory implies that certain patterns will not arise. For example, any decision-maker who satisfies Kota’s axioms and who displays neutrality toward ex post ambiguity must also display neutrality toward ex ante ambiguity. All other patterns are possible. As it happens, this is exactly what is found in an experimental study by Dominiak and Shnedler.

What’s really cool about the paper is that Kota uses exactly the same setup and axioms to derive a theory of fairness in the presence of randomization. A basic question is whether the “fair” thing to do is to toss a coin to decide who gets a prize, or to give each person an identical, independent lottery. Compared to theories of uncertainty attitudes, our models of preferences for fairness are much less advanced and have barely touched on this kind of question. Kota’s model brings that literature very far very fast.

Kota is a Northwestern PhD (to be) who just defended his dissertation today. You could call it a shotgun defense because Kota’s job market was highly unusual. As a 4th year student he was not planning to go on the market until next year, but Caltech discovered him and plucked him off the bench and into the Big Leagues. He starts as an Assistant Professor there in the Fall. Congratulations Kota!

Believe it or not that line of thinking does lie just below the surface in many recruiting discussions.  The recruiting committee wants to hire good people but because the market moves quickly it has to make many simultaneous offers and runs the risk of having too many acceptances.  There is very often a real feeling that it is safe to make offers to the top people who will come with low probability but that it’s a real risk to make an offer to someone for whom the competition is not as strong and who is therefore likely to accept.

This is not about adverse selection or the winner’s curse.  Slot-constraint considerations appear at the stage where it has already been decided which candidates we like and all that is left is to decide which ones we should offer.  Anybody who has been involved in recruiting decisions has had to grapple with this conundrum.

But it really is a phantom issue.  It’s just not possible to construct a plausible model under which your willingness to make an offer to a candidate is decreasing in the probability she will come.  Take any model in which there is a (possibly increasing) marginal cost of filling a slot and candidates are identified by their marginal value and the probability they would accept an offer.

Consider any portfolio of offers which involves making an offer to candidate F. The value of that portfolio is a linear function of the probability that F accepts the offer.  For example, consider making offers to two candidates F and O.  The value of this portfolio is

q_O [ q_F (v_F + v_O - C(2)) + (1 - q_F)(v_O - C(1)) ]

+ (1 - q_O) q_F (v_F - C(1))

where q_O and q_F are the acceptance probabilities, v_O and v_F are the values and C(\cdot) is the cost of hiring one or two candidates in total.  This can be re-arranged to

q_F \left[q_O\left(v_F-MC(2)\right)+(1- q_O) \left(v_F - C(1)\right) \right] + const.

where MC(2) = C(2) - C(1) is the marginal cost of a second hire.  If the bracketed expression is positive then you want to include F in the portfolio and the value of doing so only gets larger as q_F increases. (note to self:  wordpress latex is whitespace-hating voodoo)

In particular, if F is in the optimal portfolio, then that remains true when you raise q_F.
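The monotonicity is easy to check numerically. A sketch with invented values and costs (the formula is the one above; the numbers are mine):

```python
# Expected value of making offers to both F and O, as a function of the
# acceptance probabilities. C[k] is the total cost of k hires, C[0] = 0.
def portfolio_value(q_F, q_O, v_F, v_O, C):
    return (q_O * (q_F * (v_F + v_O - C[2]) + (1 - q_F) * (v_O - C[1]))
            + (1 - q_O) * q_F * (v_F - C[1]))

C = [0, 1, 3]  # increasing marginal cost: MC(1) = 1, MC(2) = 2
values = [portfolio_value(q, 0.3, 5, 6, C) for q in (0.1, 0.5, 0.9)]
print(values)  # strictly increasing in q_F
```

Because the value is linear in q_F with a positive coefficient whenever F belongs in the portfolio, raising q_F can only make including F more attractive, never less.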

That’s not to say that there aren’t interesting portfolio issues involved in this problem.  One issue is that worse candidates can crowd out better ones.  In the example, as the probability that F accepts an offer, q_F, increases, you begin to drop others from the portfolio.  Possibly even others who are better than F.

For example, suppose that the department is slot-constrained and would incur the Dean’s wrath if it hired two people this year.  If v_O > v_F so that you prefer candidate O, you will nevertheless make an offer only to F if q_F is very high.

In general, I guess that the optimal portfolio is a hard problem to solve.  It reminds me of this paper by Hector Chade and Lones Smith.  They study the problem of how many schools to apply to, but the analysis is related.

What is probably really going on when the titular quotation arises is that factions within the department disagree about the relative values of F and O.  If F is a theorist and O a macro-economist, the macro-economists will foresee that a high q_F means no offer for O.

Another observation is that Deans should not use hard offer constraints but instead expose the department to the true marginal cost curve, understanding that the department will make these calculations and voluntarily ration offers on its own.  (When q_F is not too high, it is optimal to make offers to both and a hard offer constraint prevents that.)