You are currently browsing jeff’s articles.

I don’t have a Kindle but I noticed that people were complaining so much about the absence of page numbers on early versions that Amazon has restored page numbers in the latest Kindle software. This adherence to tradition (in which I include prudish Professors and Editors who demand precise page references in Bibliographies) destroys a unique advantage of eBooks that could make them more than just a fragile, signal-jamming replacement for old fashioned pulp.

Suspense requires randomization. If you are reading my paper-bound novel and I want to maximize your suspense I am constrained by your ability to infer, based on how many pages are left, the likelihood that the story is going to play out as staged or whether there will be another twist in the plot. It is impossible for me to convince you of a “false ending” if you are on page 200 out of 400. The bastard publisher has spoiled it for me because 1) he has, without my permission, smeared page numbers all over my handiwork, and 2) he has refused to add bulk by randomly inserting blank pages at the end to help me fool you.

Now the Kindle, and eBook readers in general, allow me to shuck that constraint. I can end the novel at any point and you would never know that the end is right around the corner. I could make it 1 page long. Imagine the effect of that! I could make it grind to a halt on page 200 only to surprise you with a development completely out of the blue that takes another 200 pages to resolve.

But no, you can’t handle the suspense. You call yourself a reader but you are really just a page counter. You begged for your time-marking crutch and Amazon obliged. Your loss, my novel goes back in the drawer.

P.S. Emir Kamenica gets some of the blame for this post.

Q.S. Quote from my buddy Dave: The key is to have a useful term–for example, I have stopped using the term “page number” and now use the term “oprah” to refer to the location in printed matter. I encourage you to start using this in the classroom. “All right, please turn to Oprah 31.”

An important role of government is to provide public goods that cannot be provided via private markets. There are many ways to express this view theoretically, a famous one using modern theory is Mailath-Postlewaite.  (Here is a simple exposition.) They consider a public good that potentially benefits many individuals and can be provided at a fixed per-capita cost C.  (So this is a public good whose cost scales proportionally with the size of the population.)

Whatever institution is supposed to supply this public good faces the problem of determining whether the sum of all individuals’ values exceeds the cost.  But how do you find out individuals’ values?  Without government intervention the best you can do is ask them to put their money where their mouths are.  But this turns out to be hopelessly inefficient.  For example if everybody is expected to pay (at least) an equal share of the cost, then the good will be produced only if every single individual has a willingness to pay of at least C.  The probability that happens shrinks to zero exponentially fast as the population grows.  And in fact you can’t do much better than have everyone pay an equal share.
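To see how fast voluntary provision breaks down, here is a small numerical sketch. The success probability p below is a hypothetical assumption (the post does not specify a distribution of values): if each individual’s willingness to pay exceeds the per-capita cost C with independent probability p < 1, the chance that every one of n individuals is willing to pay an equal share is p^n.

```python
# Hypothetical sketch: probability that voluntary equal-share financing
# succeeds when each of n individuals independently values the good
# above the per-capita cost C with probability p.
def funding_probability(p: float, n: int) -> float:
    # The good is produced only if ALL n individuals are willing to pay,
    # which happens with probability p ** n.
    return p ** n

# Even with p = 0.9 the probability collapses as the population grows.
for n in (10, 100, 1000):
    print(n, funding_probability(0.9, n))
```

Even a 90% chance per person leaves essentially no chance of provision once the population reaches a few hundred, which is the exponential collapse described above.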

Government can help because it has the power to tax.  We don’t have to rely on voluntary contributions to raise enough to cover the costs of the good. (In the language of mechanism design, the government can violate individual rationality.) But compulsory contributions don’t amount to a free lunch:  if you are forced to pay you have no incentive to truthfully express your true value for the public good.  So government provision of public goods helps with one problem but exacerbates another.  For example if the policy is to tax everyone then nobody gives reliable information about their value and the best government can do is to compare the cost with the expected total value.  This policy is better than nothing but it will often be inefficient since the actual values may be very different.

But government can use hybrid schemes too.  For example, we could pick a representative group in the population and have them make voluntary contributions to the public good, signaling their value.  Then, if enough of them have signaled a high willingness to pay, we produce the good and tax everyone else an equal share of the residual cost.  This way we get some information revelation but not so much that the Mailath Postlewaite conclusion kicks in.

Indeed it is possible to get very close to the ideal mechanism with an extreme version of this.  You set aside a single individual and then ask everyone else to announce their value for the public good.  If the total of these values exceeds the cost you produce the public good and then charge them their Vickrey-Clarke-Groves (VCG) tax.  It is well known that these taxes provide incentives for truthful revelation but that the sum of these taxes will fall short of the cost of providing the public good. Here’s where government steps in.  The singled-out agent will be forced to cover the budget shortfall.
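The extreme scheme above can be sketched numerically. This is a minimal illustration rather than the formal mechanism from the literature: the reported values and the cost are made up, each announcing agent pays the standard Clarke pivot tax, and the set-aside agent covers whatever is left of the cost.

```python
# Minimal sketch of the pivot (VCG) mechanism for a public good with
# hypothetical reported values. The set-aside agent does not report;
# he is forced to cover the budget shortfall.
def pivot_mechanism(values, cost):
    total = sum(values)
    if total < cost:
        # Good not produced: nobody pays anything.
        return False, [0.0] * len(values), 0.0
    # Clarke pivot tax for agent i: the amount by which the remaining
    # agents' values fall short of the cost once i is removed.
    taxes = [max(0.0, cost - (total - v)) for v in values]
    shortfall = cost - sum(taxes)  # paid by the set-aside agent
    return True, taxes, shortfall

produced, taxes, shortfall = pivot_mechanism([60, 50, 30], cost=100)
# The pivot taxes sum to only 30 here, so the set-aside agent
# is charged the remaining 70.
```

The example shows the generic outcome the post describes: truthful announcements decide production efficiently, but the pivot taxes fall short of the cost.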

Now obviously this is bad policy and is probably infeasible anyway since the poor guy may not be able to pay that much.  But the basic idea can be used in a perfectly acceptable way.  The idea was that by taxing an agent we lose the ability to make use of information about his value so we want to minimize the efficiency loss associated with that.  Ideally we would like to find an individual or group of individuals who are completely indifferent about the public good and tax them.  Since they are indifferent we don’t need their information so we lose nothing by loading all of the tax burden on them.

In fact there is always such a group and it is a very large group:  everybody who is not yet born.  Since they have no information about the value of a public good provided today they are the ideal budget balancers.  Today’s generation uses the efficient VCG mechanism to decide whether to produce the good and future generations are taxed to make up any budget imbalance.

There are obviously other considerations that come into play here and this is an extreme example contrived to make a point.  But let me be explicit about the point.  Balanced budget requirements force today’s generation to internalize all of the costs of their decisions.  It is ingrained in our senses that this is the efficient way to structure incentives.  For if we don’t internalize the externalities imposed on subsequent generations we will make inefficient decisions.  While that is certainly true on many dimensions, it is not a universal truth.  In particular public goods cannot be provided efficiently unless we offload some of the costs to the next generation.

I am standing in front of an intimidating audience and a question stops me. I should know how to answer. I do know the answer. But it’s not coming to me right away. So the question has me stopped.

There is a silence. And at the center of that silence stands me waiting for the answer to come. At first. But the silence is piling up, and as it does I start to make alternative plans. Up to this point I was just hanging passively as some automatic mechanism searches through the files for the right thread, but now I may have to start actively conjuring something up.

The last thing you want to do is waste precious moments deciding when to cut off that search, especially because as soon as those thoughts start to creep in they threaten to be a self-fulfilling prophecy.

But I can’t not think about it. Because no matter how confident I am that I do have the answer stored in there somewhere, there is always a chance that memory fails and as long as I stand here, the answer is still not going to come to me. And the longer I wait the less time I am going to have to stammer out something forced. All the while the audience is growing uncomfortable.

This is more than just an optimal stopping problem because of the Zen state variable. It’s the Zen fixed point. The more confident you are that the answer will come, the less you will be infiltrated by thoughts of the eventual collapse, and the more likely your confidence will be validated. And then there’s what happens when that doesn’t happen.

And then there’s what happens when you know all of the above and it either fuels your confidence (because you are the confident type) or sends you even sooner spiraling into a panic searching for plan B, crowding out plan Absent, all the while escalating the panic (because you are prone to panic) ensuring that whatever finally does come out is going to be a big mess.

Chickle: Type-A Meditation from www.f1me.net

It is about three and a half inches long and mostly black. It has a cap that, when removed, reveals a small silver point, out of the end of which comes black ink. There is a window of clear plastic on the body of the object through which you can monitor how quickly said ink disappears. The general shape is cylindrical. Its diameter is less than one centimeter and fits nicely between the fingers of a woman who is 5’4” tall with slightly oversized hands for her height. The decorative elements are minimal, but there are some advertorial ones. These read: “Pilot. Precise V5. Rolling Ball. Extra Fine.”

The rest of the ode is here.

It occurs to me that in our taxonomy of varieties of public goods, we are missing a category.  Normally we distinguish public goods according to whether they are rival/non-rival and whether they are excludable/non-excludable.  It is generally easier to efficiently finance excludable public goods because, by the threat of exclusion, you can get users to reveal how much they are willing to pay for access to the public good.

I read this article about piracy of sports broadcasts and I started to wonder what effect it will have on the business of sports.  Free availability of otherwise exclusive broadcasts means that professional sports change from an excludable to a non-excludable public good.  This happened to software and music, but unique aspects of those goods enable alternative revenue sources (support in the case of software, live performance in the case of music.)

For sports the main alternative is advertising.  And since the only way to ensure that the ads can’t be stripped out of the hijacked broadcast is to make them part of the broadcast itself, we are going to see more and more ads projected directly onto the players and the field.

And then I started wondering what would be the analogue of advertising to support other non-excludable public goods.  The key property is that you cannot consume the good without being exposed to the ad.  What about clean air?  National defense?

But then I realized that there is something different about these public goods.  Not only are they non-excludable (a user cannot be prevented from consuming them), they are also unavoidable (the user himself cannot escape the public good).  And there is no reason to finance unavoidable public goods by any means other than taxation.

Here’s the point.  If the public good is avoidable, you can increase the user tax (by bundling ads) and trust that those who don’t value the public good very much will stop using it.  Given the level of the tax it would be inefficient for them to use it.  Knowing that this inefficiency can be avoided, you have more flexibility to raise the tax, effectively price discriminating among high-value users.

If the public good is unavoidable, everyone pays whether you use ads or just taxation (uncoupled with usage), so there really isn’t any difference.

So this category of an avoidable public good seems a useful one.  Can you think of other examples of non-excludable but avoidable public goods?  Sunsets: avoidable.  National defense:  unavoidable.

  1. The device in question almost rhymes with rickshaw.
  2. This explains why there are so many missing parts.
  3. During my years in Berkeley I made sure that this man told me “I hate you” at least once a week. (Where’s Stoney Burke??)
  4. Best use of dead trees ever. (But what if she’s actually crying?)
  5. 16 items available only in Chinese WalMarts.

Bryan Caplan wonders whether economic theory is on the decline. Here are some signs I have noticed:

  1. Econometrica, the most theory-oriented of the top 4 journals has a well-publicized mission to publish more applied, general interest articles, and this is indeed happening.  This comes at the expense of pure theory as well as theoretical econometrics.
  2. The new PhD market was, on the whole, difficult for theorists this year.  Strong candidates from Yale, Stanford, NYU and Princeton placed much lower than expected, some without a job offer in North America as of yet.  As far as I can tell, there will be only two junior theorists hired at top 5 departments.

But there are many positive signs too:

  1. Theorists have been recruiting targets for high-profile private sector jobs.  Michael Schwarz and Preston McAfee at Yahoo!, Susan Athey at Microsoft for example.  In addition the research departments in these places are full of theorists-on-leave.
  2. Despite some overall weakness, theory is and always has been well represented at the top of the junior market.  This year Alex Wolitzky, as pure a theorist as there is, is the clear superstar of the market.  Here is the list of invitees to the Review of Economic Studies Tour from previous years.  This is generally considered to be an all-star team of new PhDs in each year.  Two theorists out of seven per year on average.  (No theorist last year though.)
  3. In recent years, two new theory journals, Theoretical Economics and American Economic Journal:  Microeconomics, have been adopted by the leading Academic Societies in economics.  These journals are already going strong.
  4. Market design is an essentially brand new field and one of the most important contributions of economics in recent years.  It is dominated by theorists.

In my opinion there are some signs of change but, correctly interpreted, these are mostly for the better.  Decision theory, always the most esoteric of subfields, has moved to the forefront as a second wave of behavioral economics.  Macroeconomics today is more heavily theory-oriented than ever.  Theorists (and theory journals) are drawn away from pure theory and toward applied theory not because pure theory has diminished in any absolute sense, but rather because applied theory has become more important than ever.

Professor Caplan offers some related observations in his commentary:

…mathematicians masquerading as economists were never big at GMU, and it’s hard to see how they could do well in the blogosphere either.

I am sure he is not talking about Sandeep and me because we are just as bad at math as all of the other bloggers who pretend to be economists.  But just in case he is, I invite him to take a look around.  Finally,

My econjobrumors insider tells me that its countless trolls are largely frustrated theorists who feel cheated of the respect they think the profession owes them.  Speculation, yes, but speculation born of years of study of their not-so-silent screams.

He is talking about the people who anonymously post sometimes hilarious, sometimes obnoxious vitriol on that outpost of grad student angst known as EJMR. I wonder how he could possibly know the research area of anonymous posters to that web site? Among all the economists who feel cheated out of the respect that they think the profession owes them why would it be that theorists are the most likely to troll?

From a fun little article by Andrew Gelman and Deborah Nolan:

The law of conservation of angular momentum tells us that once the coin is in the air, it spins at a nearly constant rate (slowing down very slightly due to air resistance). At any rate of spin, it spends half the time with heads facing up and half the time with heads facing down, so when it lands, the two sides are equally likely (with minor corrections due to the nonzero thickness of the edge of the coin); see Figure 3. Jaynes (1996) explained why weighting the coin has no effect here (unless, of course, the coin is so light that it floats like a feather): a lopsided coin spins around an axis that passes through its center of gravity, and although the axis does not go through the geometrical center of the coin, there is no difference in the way the biased and symmetric coins spin about their axes.

On the other hand, a weighted coin spun on a table will show a bias for the weighted side.  The article describes some experiments and statistical tests to use in the classroom.  There are some entertaining stories too.  Like how the King of Norway avoided losing the entire Island of Hising to the King of Sweden by rolling a 13 with a pair of dice (“One die landed six, and the other split in half landing with both a six and a one showing.”)

Visor volley:  Toomas Hinnosaar.

I wrote last week about More Guns, Less Crime.  That was the theory, let’s talk about the rhetoric.

Public debates have the tendency to focus on a single dimension of an issue with both sides putting all their weight behind arguments on that single front.  In the utilitarian debate about the right to carry concealed weapons, the focus is on More Guns, Less Crime. As I tried to argue before, I expect that this will be a lost cause for gun control advocates.  There just isn’t much theoretical reason why liberalized gun carry laws should increase crime.  And when this debate is settled, it will be a victory for gun advocates and it will lead to a discrete drop in momentum for gun control (that may have already happened.)

And that will be true despite the fact that the real underlying issue is not whether you can reduce crime (after all there are plenty of ways to do that,) but at what cost.  And once the main front is lost, it will be too late for fresh arguments about externalities to have much force in public opinion.  Indeed, for gun advocates the debate could not be more fortuitously framed if the agenda were set by a skilled debater.  A skilled debater knows the rhetorical value of getting your opponent to mount a defense and thereby implicitly cede the importance of a point, and then overwhelming his argument on that point.

Why do debates on inherently multi-dimensional issues tend to align themselves so neatly on one axis?  And given that they do, why does the side that’s going to lose on those grounds play along?  I have a theory.

Debate is not about convincing your opponent but about mobilizing the spectators.  And convincing the spectators is neither necessary nor sufficient for gaining momentum in public opinion.  To convince is to bring others to your side.  To mobilize is to give your supporters reason to keep putting energy into the debate.

The incentive to be active in the debate is multiplied when the action of your supporters is coordinated and when the coordination among opposition is disrupted.  Coordinated action is fueled not by knowledge that you are winning the debate but by common knowledge that you are winning the debate.  If gun control advocates watch the news after the latest mass killing and see that nobody is seriously representing their views, they will infer they are in the minority and give up the fight even if in fact they are in the majority.

Common knowledge is produced when a publicly observable bright line is passed.  Once that single dimension takes hold in the public debate it becomes the bright line:  when the dust settles it will be common knowledge who won. A second round is highly unlikely because the winning side will be galvanized and the losing side demoralized.  Sure there will be many people, maybe even most, who know that this particular issue is of secondary importance but that will not be common knowledge.  So the only thing to do is to mount your best offense on that single dimension and hope for a miracle or at least to confuse the issue.

(Real research idea for the vapor mill.  Conjecture:  When x and y are random variables it is “easier” to generate common knowledge that x>0 than to generate common knowledge that x>y.)

Chickle:  Which One Are You Talking About? from www.f1me.net.

David Mitchell is a stammerer who wrote beautifully about it in his semi-autobiographical novel Black Swan Green. Here is Mitchell on The King’s Speech.  In the article he talks about his own strategies for coping with stammering:

If these technical fixes tackle the problem once it’s begun, “attitudinal stances” seek to dampen the emotions that trigger my stammer in the first place. Most helpful has been a sort of militant indifference to how my audience might perceive me. Nothing fans a stammer’s flames like the fear that your listener is thinking “Jeez, what is wrong with this spasm-faced, eyeball-popping strangulated guy?” But if I persuade myself that this taxing sentence will take as long as it bloody well takes and if you, dear listener, are embarrassed then that’s your problem, I tend not to stammer. This explains how we can speak without trouble to animals and to ourselves: our fluency isn’t being assessed. This is also why it’s helpful for non-stammerers to maintain steady eye contact, and to send vibes that convey, “No hurry, we’ve got all the time in the world.”

(Gat Gape:  The Browser) Incidentally, I watched The King’s Speech and also True Grit on a flight to San Francisco Sunday night while the Oscars were being handed out down below. I enjoyed the portrayal of stammering in TKS but unlike Mitchell I didn’t think that subject matter alone carried an entire film.  And there wasn’t much else to it.  (And by the way here is Christopher Hitchens complaining about the softie treatment of Churchill and King Edward VIII.)

True Grit was also a big disappointment.  I haven’t seen Black Swan but I hear it has some great kung fu scenes.

Whenever I teach the Vickrey auction in my undergraduate classes I give this question:

We have seen that when a single object is being auctioned, the Vickrey  (or second-price) auction ensures that bidders have a dominant strategy to bid their true willingness to pay. Suppose there are k>1 identical objects for sale.  What auction rule would extend the Vickrey logic and make truthful bidding a dominant strategy?

Invariably the majority of students give the intuitive, but wrong answer.  They suggest that the highest bidder should pay the second-highest bid, the second-highest bidder should pay the third-highest bid, and so on.

Did you know that Google made the same mistake?  Google’s system for auctioning sponsored ads for keyword searches is, at its core, the auction format that my undergraduates propose (plus some bells and whistles that account for the higher value of being listed closer to the top and Google’s assessment of the “quality” of the ads.)  And indeed Google’s marketing literature proudly claims that it “uses Nobel Prize-winning economic theory.”  (That would be Vickrey’s Nobel.)

But here’s the remarkable thing.  Although my undergraduates and Google got it wrong, in a seemingly miraculous coincidence, when you look very closely at their homebrewed auction, you find that it is not very different at all from the (multi-object) Vickrey mechanism.  (In case you are wondering, the correct answer is that all of the k highest bidders should pay the same price: the k+1st highest bid.)
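Both rules are easy to state in code. Here is a minimal sketch with made-up bids, contrasting the intuitive rule my students (and early Google) propose with the correct k-unit Vickrey rule:

```python
# Contrast the intuitive-but-wrong pricing rule with the k-unit
# Vickrey rule. Bids are hypothetical; the highest k bidders win.
def wrong_rule(bids, k):
    # The i-th highest winner pays the (i+1)-th highest bid.
    s = sorted(bids, reverse=True)
    return s[1:k + 1]

def vickrey_rule(bids, k):
    # Every winner pays the same price: the (k+1)-th highest bid.
    s = sorted(bids, reverse=True)
    return [s[k]] * k

bids = [10, 8, 6, 4]
print(wrong_rule(bids, k=2))    # the two winners pay 8 and 6
print(vickrey_rule(bids, k=2))  # both winners pay 6
```

Under the wrong rule a winner’s payment depends on the bid immediately below his own, which is exactly what destroys the dominant-strategy property; under the Vickrey rule no winner’s payment depends on his own bid.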

In a famous paper, Edelman, Ostrovsky and Schwarz (and contemporaneously Hal Varian) studied the auction they named the Generalized Second Price Auction (GSPA) and showed that it has an equilibrium in which bidders, bidding optimally, effectively undo Google’s mistaken rule and restore the proper Vickrey pricing schedule.  It’s not a dominant strategy, but it is something pretty close:  if everyone bids this way no bidder is going to regret his bid after the auction is over. (An ex post equilibrium.)

Interestingly this wasn’t the case with the old style auctions that were in use prior to the GSPA.  Those auctions were based on a first-price model in which the winners paid their own bids.  In such a system you always regret your bid ex post because you either bid too much (anything more than your opponents’ bid plus a penny is too much) or too little.  Indeed, advertisers used software agents to modify their standing bids at high frequency in order to minimize these mistakes.  In practice this meant that auction outcomes were highly volatile.

So the Google auction was a happy accident.  On the other hand, an auction theorist might say that this was not an accident at all.  The real miracle would have been to come up with an auction that didn’t somehow reduce to the Vickrey mechanism.  Because the revenue equivalence theorem says that the exact rules of the auction matter only insofar as they determine who the winners are.  Google could use any mechanism, and as long as it’s guaranteed that the bidders with the highest values will win, that can be accomplished in an ex post equilibrium with the bidders paying exactly what they would have paid in the Vickrey mechanism.

If you’ve ever sat down at a pub to a plate of really good fish and chips—the kind in which the fish stays tender and juicy but the crust is supercrisp—odds are that the cook used beer as the main liquid when making the batter. Beer makes such a great base for batter because it simultaneously adds three ingredients—carbon dioxide, foaming agents and alcohol—each of which brings to bear different aspects of physics and chemistry to make the crust light and crisp.

The CO2 escaping from the frying batter makes for a light texture.  This effect is enhanced by the low surface tension (which in the glass makes the foamy head), which keeps the bubbles in place for the duration of the cooking process.  And the alcohol evaporates faster than water, so the crust sets quickly, reducing the risk of overcooking.  The story is in Scientific American.

I was walking along, and I saw just this hell of a big moose turd, I mean it was a real steamer! So I said to myself, “self, we’re going to make us some moose turd pie.” So I tipped that prairie pastry on its side, got my sh*t together, so to speak, and started rolling it down towards the cook car: flolump, flolump, flolump. I went in and made a big pie shell, and then I tipped that meadow muffin into it, laid strips of dough across it, and put a sprig of parsley on top. It was beautiful, poetry on a plate, and I served it up for dessert.

Here’s one of the thorniest incentive problems known to man.  In an organization there is a job that has to be done.  And not just anybody can do it well, you really need to find the guy who is best at it.  The livelihood of the organization depends on it.  But the job is no fun and everyone would like to get out of doing it.  To make matters worse, performance is so subjective that no contract can be written to compensate the designee for a job well done.

The core conflict is exemplified in a story by Utah Phillips about railroad workers living out in the field as they work to level the track.  Someone has to do the cooking for the team and nobody wants to do it.  Lacking any better incentive scheme they went by the rule that if you complained about the food then from now on you were going to have to do the cooking.

You can see the problem with this arrangement.  But is there any better system?  You want to find the best cook but the only way to reward him is to relieve him of the job.  That would be self defeating even if you could get it to work.  You probably couldn’t because who would be willing to say the food was good if it meant depriving themselves of it the next time?

A simple rotation scheme at least has the benefit of removing the perverse incentive.  Then on those days when the best cook has the job we can trust that he will make a good meal out of his own self interest.  He might even volunteer to be the cook.

But it might be optimal to rule out volunteering too.  Because that could just bring back the original incentive problem in a new form.  Since ex ante nobody knows who the best cook is, everyone will set out to prove that they are incapable of making a palatable meal so that the one guy who actually can cook, whoever he is, will volunteer.

It may help to keep the identity of the cook secret.  Then when a capable cook actually has the job he can feel free to make a good meal without worrying that he will be recruited permanently.  It will also lower the incentive for the others to make a bad meal because nobody will know who to exclude in the future.

Even if there is no scheme that really solves the incentive problem, the freedom to complain is essential for organizational morale.

Well, this big guy come into the mess car, I mean, he’s about 5 foot forty, and he sets himself down like a fool on a stool, picked up a fork and took a big bite of that moose turd pie. Well he threw down his fork and he let out a bellow, “My God, that’s moose turd pie!”

“It’s good though.”

  1. Watson v Pynchon v McCarthy
  2. When chivalry was still alive, you would never put a bag over her head.  A gentleman would use a basket instead.
  3. However there was one catch: he would have to be awake and playing the banjo whilst under the knife. (with video.)
  4. PDF of junior Ghaddafi’s PhD thesis from the LSE.

My daughter’s 4th grade class read the short story “The Three Questions” by Tolstoy (a two minute read.)  This afternoon I led a discussion of the story. Here are my notes.

There is a King who decides that he needs the answers to three questions.

  1. What is the best time to act?
  2. Who is the most important person to listen to?
  3. What is the most important thing to do?

Because if he knew the answers to these questions he would never fail in what he set out to do.  He sends out a proclamation in his Kingdom offering a reward to anyone who can answer these questions but he is disappointed because although many offer answers…

All the answers being different, the King agreed with none of them, and gave the reward to none.

So instead he went to see a hermit who lived alone in the Wood and who might be able to answer his questions.  The King and the hermit spend the day in silence digging beds in the ground.  Growing impatient, the King confronts the hermit and makes one final request for the answers to the King’s questions.  But before the hermit is able to respond they are interrupted by a wounded stranger who needs their help.  They bandage the stranger and lay him in bed and the King himself falls asleep and does not awake until the next morning.

As it turns out, the stranger had been intending to murder the King but was caught by the King’s bodyguard and stabbed.  Unknowingly the King saved his enemy’s life and now the man was eternally grateful and begging for the King’s forgiveness. The King returns to the hermit and asks again for the answers to his questions.

“Do you not see,” replied the hermit. “If you had not pitied my weakness yesterday, and had not dug those beds for me, but had gone your way, that man would have attacked you, and you would have repented of not having stayed with me. So the most important time was when you were digging the beds; and I was the most important man; and to do me good was your most important business. Afterwards when that man ran to us, the most important time was when you were attending to him, for if you had not bound up his wounds he would have died without having made peace with you. So he was the most important man, and what you did for him was your most important business. Remember then: there is only one time that is important– Now! It is the most important time because it is the only time when we have any power. The most necessary man is he with whom you are, for no man knows whether he will ever have dealings with any one else: and the most important affair is, to do him good, because for that purpose alone was man sent into this life!”

We are left to decide for ourselves what the King will do with these answers. The King abhors uncertainty. This is why he discarded the many different answers given by the learned men in his Kingdom. The simplicity of the hermit’s advice is bound to appeal to the King. It is certainly a rule that can be applied in any situation. And it is indeed motivated by acknowledgement of uncertainty in the extreme.  The Here and Now are the only certainties. And it follows from uncertainty about where you will be in the future, with whom you will be, and what options will be before you that the Here and Now are the most important.

(The hermit is not only outlining a foundation for hyperbolic discounting, but also a Social Welfare analog.  Your social welfare function should heavily discount all people except those who are before you right now.)

But what would come of the King were he to follow the advice of the hermit? Imagine what it would be like to live like that. Would you ever even make it to the bathroom to brush your teeth? How many opportunities and people would distract you along the way?

If the hermit’s advice were any good then surely the hermit himself must follow it. Perhaps the hermit was a King once.

You probably know the Ellsberg urn experiment. In the urn on the left there are 50 black balls and 50 red balls. In the urn on the right there are 100 balls, some of them red and some of them black. No further information about the urn on the right is given. Subjects are allowed to pick an urn and bet on a color. They win $1 if the ball drawn from the urn they selected is the color they bet.

Subjects display aversion to ambiguity: they strictly prefer to bet on the left urn where the odds are known than on the right urn where the odds are unknown. This is known as Ellsberg’s paradox, because whatever probabilities you attach to the distribution of balls in the right urn, there is a color you could bet on and do at least as well as the left urn. This experiment revealed a new dimension to attitudes towards uncertainty that has the potential to explain many puzzles of economic behavior. (The most recent example being the job-market paper of Gharad Bryan from Yale who studies the extent to which ambiguity can explain insurance market failures in developing countries.)

Decades and thousands of papers on the subject later, there remains a famous critique of the experiment and its interpretation due to Raiffa. The subjects could “hedge” against the ambiguity in the right urn by tossing a coin to decide whether to bet on red or black. To see the effect of this, note that if there are n black balls and (100-n) red balls, then the coin toss means that with 50% probability you bet on black and win with probability n%, and with 50% probability you bet on red and win with probability (100-n)%, giving you a total probability of winning equal to 50%. Exactly the same odds as the left urn, no matter what the actual value of n is. Given this ability to remove ambiguity altogether, the choices of the subjects cannot be interpreted as having anything to do with ambiguity aversion.

Kota Saito begins with the observation that the Raiffa randomization is only one of two ways to remove the ambiguity from the right urn. Another way is to randomize ex post. Hypothetically: first draw the ball, observe its color, and then toss a coin to decide whether to bet on red or black. Like the ex-ante coin tossing, this strategy guarantees that you have a 50% chance of winning. Kota points out that theories that formalize ambiguity assume that these two strategies are viewed equivalently by decision-makers. If a subject is ambiguity averse, then he prefers either form of randomization to the right urn and he views either of them as indifferent to the left urn.

But the distinct timing makes them conceptually different. In the ex ante case, after the coin is tossed and you decide to bet on red, say, you still face ambiguity going forward just as you would have if you had chosen to bet on red without tossing a coin. In the ex post case, all of the ambiguity is removed once you have decided how to bet. (There is an old story about Mark Machina’s mom that relates to this. See example 2 here.)
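A few lines of Python make the hedging argument concrete (this is my own sketch, not anything from Kota’s paper): whatever the unknown number n of black balls in the right urn, either randomization strategy yields exactly the odds of the left urn.

```python
def ex_ante_win_prob(n: int) -> float:
    """Raiffa's hedge: toss the coin first, then bet on the color it picks."""
    p_black = n / 100
    return 0.5 * p_black + 0.5 * (1 - p_black)

def ex_post_win_prob(n: int) -> float:
    """The ex post variant: draw the ball, then toss the coin to pick the bet."""
    p_black = n / 100
    # whatever color was drawn, the coin matches it half the time
    return p_black * 0.5 + (1 - p_black) * 0.5

# both strategies hedge away the ambiguity: 50% odds for every possible n
assert all(abs(ex_ante_win_prob(n) - 0.5) < 1e-12 for n in range(101))
assert all(abs(ex_post_win_prob(n) - 0.5) < 1e-12 for n in range(101))
```

The two functions are arithmetically identical, which is exactly the equivalence that standard theories impose and that Kota relaxes: the difference lies in the timing, not the odds.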

Kota disentangles these objects and models a decision-maker who may have distinct attitudes toward these two ways of mixing objective randomization with subjectively uncertain prospects. In particular he weakens the standard axiom which requires that the order of uncertainty resolution doesn’t matter to the decision-maker. With this weaker assumption he is able to derive an elegant model in which a single parameter encodes the decision-maker’s pattern of ambiguity attitudes. Interestingly, the theory implies that certain patterns will not arise. For example, any decision-maker who satisfies Kota’s axioms and who displays neutrality toward ex post ambiguity must also display neutrality toward ex ante ambiguity. All other patterns are possible. As it happens, this is exactly what is found in an experimental study by Dominiak and Schnedler.

What’s really cool about the paper is that Kota uses exactly the same setup and axioms to derive a theory of fairness in the presence of randomization. A basic question is whether the “fair” thing to do is to toss a coin to decide who gets a prize, or to give each person an identical, independent lottery. Compared to theories of uncertainty attitudes, our models of preferences for fairness are much less advanced and have barely touched on this kind of question. Kota’s model brings that literature very far very fast.

Kota is a Northwestern PhD (to be) who just defended his dissertation today. You could call it a shotgun defense because Kota’s job market was highly unusual. As a 4th year student he was not planning to go on the market until next year, but Caltech discovered him and plucked him off the bench and into the Big Leagues. He starts as an Assistant Professor there in the Fall. Congratulations Kota!

Believe it or not that line of thinking does lie just below the surface in many recruiting discussions.  The recruiting committee wants to hire good people but, because the market moves quickly, it has to make many simultaneous offers and runs the risk of having too many acceptances.  There is very often a real feeling that it is safe to make offers to the top people, who will come with low probability, but that it’s a real risk to make an offer to someone for whom the competition is not as strong and who is therefore likely to accept.

This is not about adverse selection or the winner’s curse.  Slot-constraint considerations appear at the stage where it has already been decided which candidates we like and all that is left is to decide which ones we should offer.  Anybody who has been involved in recruiting decisions has had to grapple with this conundrum.

But it really is a phantom issue.  It’s just not possible to construct a plausible model under which your willingness to make an offer to a candidate is decreasing in the probability she will come.  Take any model in which there is a (possibly increasing) marginal cost of filling a slot and candidates are identified by their marginal value and the probability they would accept an offer.

Consider any portfolio of offers which involves making an offer to candidate F. The value of that portfolio is a linear function of the probability that F accepts the offer.  For example, consider making offers to two candidates F and O.  The value of this portfolio is

q_O [ q_F (v_F +v_O - C(2))+( 1 - q_F )(v_O - C(1) ) ]

+ (1 - q_O) q_F ( v_F - C(1) )

where q_O and q_F are the acceptance probabilities, v_O and v_F are the values and C(\cdot) is the cost of hiring one or two candidates in total.  This can be re-arranged to

q_F \left[q_O\left(v_F-MC(2)\right)+(1- q_O) \left(v_F - C(1)\right) \right] + const.

where MC(2) = C(2) - C(1) is the marginal cost of a second hire.  If the bracketed expression is positive then you want to include F in the portfolio and the value of doing so only gets larger as q_F increases. (note to self:  wordpress latex is whitespace-hating voodoo)

In particular, if F is in the optimal portfolio, then that remains true when you raise q_F.
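This is easy to verify numerically. Here is a minimal sketch (function names and numbers are my own, chosen for illustration) that computes the two-offer portfolio value above and confirms it is linear and increasing in q_F:

```python
def portfolio_value(q_F, q_O, v_F, v_O, C):
    """Expected value of making offers to both F and O; C(k) is the total
    cost of hiring k people in total, with C(0) = 0."""
    return (q_O * (q_F * (v_F + v_O - C(2)) + (1 - q_F) * (v_O - C(1)))
            + (1 - q_O) * q_F * (v_F - C(1)))

C = lambda k: 1.0 * k   # constant marginal cost of a hire (illustrative)

values = [portfolio_value(q_F, 0.5, 3.0, 4.0, C) for q_F in (0.2, 0.5, 0.9)]
assert values[0] < values[1] < values[2]   # value is increasing in q_F
# linear in q_F: equal slopes over the two intervals
assert abs((values[1] - values[0]) / 0.3 - (values[2] - values[1]) / 0.4) < 1e-9
```

With these numbers the bracketed coefficient on q_F equals 2 > 0, so making F a surer bet only makes the offer to F more attractive.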

That’s not to say that there aren’t interesting portfolio issues involved in this problem.  One issue is that worse candidates can crowd out better ones.  In the example, as the probability that F accepts an offer, q_F, increases, you begin to drop others from the portfolio.  Possibly even others who are better than F.

For example, suppose that the department is slot-constrained and would incur the Dean’s wrath if it hired two people this year.  If v_O > v_F so that you prefer candidate O, you will nevertheless make an offer only to F if q_F is very high.
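A toy calculation (all numbers invented for illustration) makes the crowding-out concrete: with a hard one-slot constraint the department makes a single offer, and a likely-to-accept F can beat a strictly preferred but hard-to-get O.

```python
def single_offer_value(q, v, cost=1.0):
    """Expected value of offering the only slot to one candidate."""
    return q * (v - cost)

v_O, v_F = 5.0, 4.0    # the department strictly prefers O...
q_O, q_F = 0.5, 0.9    # ...but F is far more likely to accept

assert v_O > v_F
# with one slot, the offer goes to F: the likely acceptance wins out
assert single_offer_value(q_F, v_F) > single_offer_value(q_O, v_O)
```

Here F’s expected value is 0.9 × 3 = 2.7 against O’s 0.5 × 4 = 2.0, so the better candidate gets no offer.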

In general, I guess that the optimal portfolio is a hard problem to solve.  It reminds me of this paper by Hector Chade and Lones Smith.  They study the problem of how many schools to apply to, but the analysis is related.

What is probably really going on when the titular quotation arises is that factions within the department disagree about the relative values of F and O.  If F is a theorist and O a macro-economist, the macro-economists will foresee that a high q_F means no offer for O.

Another observation is that Deans should not use hard offer constraints but instead expose the department to the true marginal cost curve, understanding that the department will make these calculations and voluntarily ration offers on its own.  (When q_F is not too high, it is optimal to make offers to both and a hard offer constraint prevents that.)

If you were the President of the United States and, on camera, you were asked

If there is an uprising in Saudi Arabia the likes of which we have seen in Tunisia, Egypt, and Libya, will the United States stand by the protestors and ask the regime to leave?

how would you answer?

The Texas legislature is on the verge of passing a law permitting concealed weapons on University campuses, including the University of Texas where just this Fall my co-author Marcin Peski was holed up in his office waiting out a student who was roaming campus with an assault rifle.

This post won’t come to any conclusions, but I will try to lay out the arguments as I see them.  “More guns, less crime” requires two assumptions: first, that people will carry guns to protect themselves and, second, that gun-related crime will be reduced as a result.

There are two reasons that crime will be reduced: crime pays off less often, and sometimes it leads to shooting. In a perfect world, a gun-toting victim of a crime simply brandishes his gun and the criminal walks away or is apprehended and nobody gets hurt.  In that perfect world the decision to carry a gun is simple.  If there is any crime at all you should carry a gun because there are no costs and only benefits.  And then the decision of criminals is simple too:  crime doesn’t pay because everyone is carrying a gun.

(In equilibrium we will have a tiny bit of crime, just enough to make sure everyone still has an incentive to carry their guns.)

But the world is not perfect like that, and when a gun-carrying criminal picks on a gun-carrying victim there is a chance that either of them will be shot.  This changes the incentives.  Now your decision to carry a gun is a trade-off between the chance of being shot and the cost of being the victim of a crime.  The people who will now choose to carry guns are those for whom the cost of being the victim of a crime outweighs the cost of an increased chance of getting shot.

If there are such people then there will be more guns.  These additional guns will reduce crime because criminals don’t want to be shot either.  In equilibrium there will be a marginal concealed-weapon carrier.  He’s the guy who, given the level of crime, is just indifferent between being a victim of crime and having a chance of being shot.  Everyone who is more willing to escape crime and/or more willing to face the risk of being shot will carry a gun.  Everyone else will not.

In this equilibrium there are more guns and less crime.  On the other hand there is no theoretical reason that this is a better outcome than no guns, more crime.  Because this market has externalities:  there will be more gun violence.  Indeed the key endogenous variable is the probability of a shootout if you carry a gun and/or commit a crime.  It must be high enough to deter crime.
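One way to see the structure of this equilibrium is a toy parametrization (entirely my own; the argument above does not depend on these functional forms): victims with uniformly distributed costs of victimization carry iff the avoided crime cost exceeds the added shooting risk, and criminals with uniformly distributed gains are deterred by armed victims.

```python
def equilibrium(a=0.8, s=0.2, K_victim=2.0, K_criminal=3.0):
    """a: prob an armed victim foils the crime; s: prob an armed encounter
    turns into a shooting; K_*: cost of being shot.  Victim types
    theta ~ U[0,1] measure the cost of being victimized; criminal gains
    B ~ U[0,1]."""
    threshold = s * K_victim / a                     # the marginal gun carrier
    armed = max(0.0, 1.0 - threshold)                # fraction of victims who carry
    crime = max(0.0, 1.0 - armed * s * K_criminal)   # crimes still worth committing
    shootings = crime * armed * s                    # the equilibrium externality
    return armed, crime, shootings

armed, crime, shootings = equilibrium()
no_guns = equilibrium(K_victim=10.0)   # carrying too costly: nobody carries
assert armed > 0 and crime < no_guns[1]   # more guns, less crime...
assert shootings > no_guns[2]             # ...but more gun violence
```

Both asserts hold at once: guns reduce crime in this parametrization, but the externality (shootings) is strictly positive, which is exactly why lower crime need not mean a better outcome.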

And there may not be much effect on crime at all.  Whose elasticity with respect to an increased probability of being shot is larger, the victim’s or the criminal’s?  Often the criminal has less to lose.  To deter crime, the probability of a shooting may have to increase by more than victims are willing to accept, and they may choose not to carry guns.

There is also a free-rider problem.  I would rather have you carry the gun than me.  So deterrence is underprovided.

Finally, you might say that things are different for crimes like mugging versus crimes like random shootings. But really the qualitative effects are the same and the only potential difference is in terms of magnitudes.  And it’s not obvious which way it goes.  Are random assailants more or less likely to be deterred?  As for the victims, on the one hand they have more to gain from carrying a gun when they are potentially faced with a campus shooter, but if they plan to make use of their gun they also face a larger chance of getting shot.

NB:  nobody shot at the guy at UT in September and the only person he shot was himself.

My daughter’s 4th grade class is reading a short story by O. Henry called The Two Thanksgiving Day Gentlemen.  (A two minute read.) In about an hour I will go to her class and lead a discussion of the story.  Here are my notes.

In the story we meet Stuffy Pete.  He is sitting on a bench waiting for a second gentleman to arrive.  We learn that this is an annual meeting on Thanksgiving day that Stuffy Pete is always looking forward to.  Stuffy Pete is a ragged, hungry street-dweller and the gentleman who arrives each year treats him to a Thanksgiving feast.

But on this Thanksgiving, Stuffy Pete is stuffed.  Because on his way to the meeting, he was stopped by the servant of two old ladies who had their own Thanksgiving tradition.  They treated him to an even bigger feast than he is used to.  And so he sits here, weighed down on the bench, terrified of the impending arrival.

The old gentleman arrives and recites this speech.

“Good morning, I am glad to see that the vicissitudes of another year have spared you to move in health about the beautiful world.  For that blessing alone this day of thanksgiving is well celebrated.  If you will come with me, my man, I will provide you with a dinner that should be more than satisfactory in every respect.”

The same speech he has recited every year the two gentlemen met on that same bench.  “The words themselves almost formed an institution.”

And Stuffy Pete, in tearful agony at the prospects replies “Thankee sir.  I’ll go with ye, and much obliged.  I’m very hungry sir.”

Stuffy’s Thanksgiving appetite was not his own; it now belonged to this kindly old gentleman who had taken possession of it.

The story’s deep cynicism, hinted at in the preceding quote, is only fully realized in the final paragraphs which contain the typical O. Henry ironic twist.  Stuffy, overstuffed by a second Thanksgiving feast collapses and is brought to hospital by an ambulance whose driver “cursed softly at his weight.” Shortly thereafter he is joined there by the old gentleman and a doctor is overheard chatting about his case

“That nice old gentleman over there, now” he said “you wouldn’t think that was a case of almost starvation.  Proud old family, I guess.  He told me he hadn’t eaten a thing for three days.”

Social norms and institutions re-direct self-interested motives.  Social welfare maximization is then proxied for by individual-level incentives.  But they can take on a life of their own, uncoupled from their origin.  This is the folk public choice theory of O. Henry’s staggeringly cynical fable.

Check out Michael Chwe’s book Folk Game Theory:  Strategic Analysis in Austen, Hammerstein, and African American Folk Tales. It’s a study of game theory in the context of literature and of literature through the lens of game theory.  But it’s more than that.  Each of the stories in the book illustrates what happens out of equilibrium.

The fox gets tricked by the rabbit because the fox has not understood the strategic motivation behind the rabbit’s actions.  The master pays the price when he underestimates the slave.  A folk tale is an artificially constructed scenario which purposefully takes the characters off the equilibrium path in order to teach us to stay on it.

By recovering a “people’s history of game theory” and gaining a larger understanding of its past, we enlarge its potential future. Game theory’s mathematical models are sometimes criticized for assuming ahistorical, decontextualized actors, and indeed game theory is typically applied to relatively “neutral” situations such as auctions and elections. Folk game theory shows that game theory can most interestingly arise in situations which are strongly gendered or racialized, with clear superiors and subordinates. By looking at slave folktales, we can see how the story of Flossie and the Fox is a sophisticated discussion of deterrence. We can see from Austen’s heroine Fanny Price that social norms, far from protecting sociality against the corrosive forces of individualism, can be the first line of oppression. We can see from Hammerstein’s Ado Annie how convincing others of your impulsiveness can open up new strategic opportunities. Folk game theory has wisdom which can be explored just as traditional folk medicines are now investigated by pharmaceutical companies.

By asking a hand-picked team of 3 or 4 experts in the field (the “peers”), journals hope to accept the good stuff, filter out the rubbish, and improve the not-quite-good-enough papers.

…Overall, they found a reliability coefficient (r^2) of 0.23, or 0.34 under a different statistical model. This is pretty low, given that 0 is random chance, while a perfect correlation would be 1.0. Using another measure of IRR, Cohen’s kappa, they found a reliability of 0.17. That means that peer reviewers only agreed on 17% more manuscripts than they would by chance alone.

That’s from Neuroskeptic, writing about an article that studies the peer-review process.  I couldn’t tell you what Cohen’s kappa means but let’s just take the results at face value:  referees disagree a lot.  Is that bad news for peer-review?

Suppose that you are thinking about whether to go to a movie and you have three friends who have already seen it.  You must choose in advance one or two of them to ask for a recommendation.  Then after hearing their recommendation you will decide whether to see the movie.

You might decide to ask just one friend.  If you do it will certainly be the case that sometimes she says thumbs-up and sometimes she says thumbs-down. But let’s be clear why.  I am not assuming that your friends are unpredictable in their opinions.  Indeed you may know their tastes very well.  What I am saying is rather that, if you decide to ask this friend for her opinion, it must be because you don’t know it already. That is, prior to asking you cannot predict whether or not she will recommend this particular movie.  Otherwise, what is the point of asking?

Now you might ask two friends for their opinions.  If you do, then it must be the case that the second friend will often disagree with the first friend.  Again, I am not assuming that your friends are inherently opposed in their views of movies. They may very well have similar tastes. After all they are both your friends. But, you would not bother soliciting the second opinion if you knew in advance that it was very likely to agree or disagree with the first on this particular movie. Because if you knew that then all you would have to do is ask the first friend and use her answer to infer what the second opinion would have been.

If the two referees you consult are likely to agree one way or the other, you get more information by instead dropping one of them and bringing in your third friend, assuming he is less likely to agree.

This is all to say that disagreement is not evidence that peer-review is broken. Exactly the opposite:  it is a sign that editors are doing a good job picking referees and thereby making the best use of the peer-review process.
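The friend/referee logic can be made concrete with a small information calculation (my own formalization, assuming a 75%-accurate referee and a 50-50 prior on quality): a conditionally independent pair of referees agrees less often than a perfectly correlated pair, yet carries strictly more information about the paper.

```python
from math import log2

def entropy(dist):
    """Shannon entropy in bits of a dict of probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def report_pair_dist(q, duplicate):
    """Joint distribution over (state, report1, report2).  Each referee is
    correct with probability q; if duplicate=True the second referee just
    echoes the first (perfect correlation)."""
    dist = {}
    for state in ("good", "bad"):
        for r1 in ("accept", "reject"):
            p1 = q if (r1 == "accept") == (state == "good") else 1 - q
            for r2 in ("accept", "reject"):
                if duplicate:
                    p2 = 1.0 if r2 == r1 else 0.0
                else:
                    p2 = q if (r2 == "accept") == (state == "good") else 1 - q
                dist[(state, r1, r2)] = 0.5 * p1 * p2   # 50-50 prior on state
    return dist

def mutual_info(dist):
    """I(state; reports) = H(reports) - H(reports | state)."""
    reports = {}
    for (state, r1, r2), p in dist.items():
        reports[(r1, r2)] = reports.get((r1, r2), 0.0) + p
    h_cond = entropy(dist) - 1.0   # H(state, reports) minus H(state) = 1 bit
    return entropy(reports) - h_cond

agree = lambda d: sum(p for (s, r1, r2), p in d.items() if r1 == r2)

indep = report_pair_dist(0.75, duplicate=False)
dup = report_pair_dist(0.75, duplicate=True)
assert agree(indep) < agree(dup)                 # more disagreement...
assert mutual_info(indep) > mutual_info(dup)     # ...yet more information
```

With these numbers the independent pair agrees only 62.5% of the time but delivers about 0.33 bits about quality, versus 0.19 bits from the perfectly correlated pair, which agrees always and tells you nothing beyond a single report.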

It would be very interesting to formalize this model, derive some testable implications, and bring it to data. Good data are surely easily accessible.

(Picture:  Right Sizing from www.f1me.net)

You receive an email with a question asking for advice or a suggestion or an opinion.  To give a full answer you would have to take some time to think.  You are a little busy and you would rather not give it too much thought, but there is a second consideration that leads you to give the quick-and-dirty answer right away: the longer you wait, the more time they will assume you spent thinking about it, and the more credence they will give your answer. Not to mention that more of your reputation will be at stake if you are assumed to have thought carefully.

Still, some issues are important enough to give thought to.  But how much?  The same tradeoff is there, but now the characteristics of the correspondent matter. Every additional second you spend thinking allows you to give a slightly more thoughtful answer but also increases what he expects of you.  If he is very sharp, he will read your reply and possibly see deeper into the question than you did, making you look bad.  The gap only gets bigger the longer you wait. If he is less sharp, every second tilts the balance in your favor.

All of this is predicated on him knowing just how much time you spent on the question.  You want to manipulate this by establishing a reputation for rapid-fire responses.  Then if you wait a day but still give a lousy answer, he will put it down to your just having been busy for a day before giving your usual top-of-your-head reply.  Indeed you want everyone to think you are busier than you are.

Then along comes instant messaging, Facebook, etc., speeding up communications.  You are expected to have seen the message sooner, so it’s harder to pretend you were unavoidably delayed.  On the plus side, though, now you can more easily commit to being busy.  Just friend everyone.   Your feed is so cluttered up with babble that these really important questions credibly get lost in the shuffle.  He can directly see how overloaded you are.

So the value of your marginal friend is equal to the incremental publicly observed distraction she creates.

In economic theory, the study of institutions falls under the general heading of mechanism design.  An institution is modeled as game in which the relevant parties interact and influence the final outcome.  We study how to optimally design institutions by considering how changes in the rules of the game change the way participants interact and bring about better or worse outcomes.

But when the new leaders in Egypt sit down to design a new constitution for the country, standard mechanism design will not be much help.  That’s because all of mechanism design theory is premised on the assumption that the planner has in front of him a set of feasible alternatives and is designing the game in order to improve society’s decision over those alternatives.  So it is perfectly well suited for decisions about how much a government should spend this year on all of the projects before it.  But to design a constitution is to decide on procedures that will govern decisions over alternatives that become available only in the future, and about which today’s constitutional convention knows nothing.

The American Constitutional Convention implicitly decided how much the United States would invest in nuclear weapons before any of its delegates had any idea that such a thing was possible.

Designing a constitution raises a unique set of incentive problems.  A great analogy is deciding on a restaurant with a group of friends.  Before you start deliberating you need to know what the options are.  Each of you knows about some subset of the restaurants in town and whatever procedure the group will use to ultimately decide affects whether or not you are willing to mention some of the restaurants you know about.

Ideally you would like a procedure which encourages everyone to name all the good restaurants they know about so that the group has as wide a set of choices as possible.  But you can’t just indiscriminately reward people for bringing alternatives to the table because that would only lead to a long list of mostly lousy choices.

You can only expect people to suggest good restaurants if they believe that the restaurants they suggest have a chance of being chosen.  And now you have to worry about strategic behavior.  If I know a good Chinese restaurant but I am not in the mood for Chinese, then how are you going to reward me for bringing it up as an option?

When we think about institutions for public decisions, we have to take into account how they impact this strategic problem.  Democracy may not be the best way to decide on a restaurant.  If the status quo, say the Japanese restaurant, is your second-favorite, you may not suggest the Mexican restaurant for fear that it will split the vote and ultimately lead to the Moroccan restaurant, your least favorite.

Certainly such political incentives affect modern day decision-making.  Would a better health-care proposal have materialized were it not for fear of what it would be turned into by the political sausage mill?

  1. It’s apparent that nobody in your organization has enough influence over Phil Lesh to evoke anything resembling normal behaviour.
  2. 30 sows and pigs. (say that fast with a lisp.)
  3. Malcolm Gladwell book generator.
  4. What makes Nancy so great, according to Sidney.
  5. Rooster justice.

Following up on the Trivers-Willard hypothesis.  The evidence is apparently that promiscuity, a trait that confers more reproductive advantage on males than females, is predictive of a greater than 50% probability of male offspring.  A commenter claimed that there is a bias in favor of male offspring when the mother is impregnated close to ovulation and wondered whether the study controlled for that.  A second commenter pointed out that there is no reason to control for that because that may be exactly the channel through which the Trivers-Willard effect works.

So now put yourself in the shoes of the intelligent designer. Suppose you are given that promiscuity is such a trait.  You are given control over the male-female proportion of offspring and you are designing the female of the species.  What you want to do is program her to have male offspring when she mates with a promiscuous male.  But you cannot micromanage because there is no way to condition this directly on the promiscuity of the mate.  The best you can do is vary the sex proportions conditional on biological signals, for example the date in the cycle.

How would you do this?  Of all the “states of the system” that you can condition on, you would find the one such that conditional on having sex in that state, the relative likelihood that her partner was the promiscuous type was maximized.  You would program her to increase the proportion of male offspring in those states.

Is sex close to ovulation such a signal?  I don’t see why.  But we could think of some that would qualify.  How about the signal that he is delivering a small quantity of sperm?  The encounter lasted longer than usual, this is the first time she had sex in a while, these sperm have not been seen before, etc…
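As a toy illustration of the designer’s rule (all of these conditional probabilities are invented for the example), the designer computes, for each biological state, the likelihood ratio in favor of a promiscuous mate, and boosts the proportion of male offspring in the state where that ratio is highest:

```python
# P(sex occurs in this state | mate type) -- purely hypothetical numbers
states = {
    "near_ovulation":   {"promiscuous": 0.30, "faithful": 0.28},
    "small_sperm_load": {"promiscuous": 0.40, "faithful": 0.10},
    "first_in_a_while": {"promiscuous": 0.20, "faithful": 0.12},
}

def likelihood_ratio(state):
    """How much more likely this state is under a promiscuous mate."""
    return states[state]["promiscuous"] / states[state]["faithful"]

# the designer raises the male proportion in the most diagnostic state
best = max(states, key=likelihood_ratio)
assert best == "small_sperm_load"
```

With these made-up numbers, sex near ovulation is barely diagnostic (ratio about 1.07), while a small sperm load is highly diagnostic (ratio 4), which is the post’s point: condition on signals of the mate’s type, not on the calendar.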

(Regular readers of this blog will know I consider that a good thing.)

The fiscal multiplier is an important and hotly debated measure for macroeconomic policy. If the government spends an additional dollar, a dollar’s worth of output is produced, but in addition the dollar is added to disposable income of the recipients who then spend some fraction of it. More output is produced, etc.
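The rounds of re-spending described above sum to a geometric series: with a marginal propensity to consume c, the multiplier is 1/(1 − c). A two-line check, using an illustrative c = 0.6 (a number I picked, not an estimate):

```python
def multiplier(mpc: float, rounds: int = 10_000) -> float:
    """Sum the re-spending rounds directly: 1 + mpc + mpc**2 + ..."""
    return sum(mpc ** k for k in range(rounds))

# with mpc = 0.6, each government dollar ultimately raises output by 2.5 dollars
assert abs(multiplier(0.6) - 1 / (1 - 0.6)) < 1e-9
```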

It’s hard to measure the multiplier because observed increases in government spending are endogenous and correlated with changes in output for reasons that have nothing to do with fiscal stimulus.

Daniel Shoag develops an instrument which isolates a random component to state-level government spending changes.

Many US states manage pensions that are defined-benefit plans. Defined benefit means that retirees are guaranteed a certain benefit level. This means that the state government bears all of the risk from the investments of these pension funds. Excess returns from these funds are unexpected, exogenous windfalls to state spending budgets.

With this instrument, Daniel estimates that an additional dollar of state government spending increases income in the state by $2.12. That is a large multiplier.

The result must be interpreted with some caveats in mind. First, state spending increases act differently than increases at the national level where general equilibrium effects on prices and interest rates would be larger. Second, these spending increases are funded by windfall returns. The effects are likely to be different than spending increases funded by borrowing which crowds out private investment.
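To see how the instrument works in principle, here is a stylized simulation (synthetic data and coefficients of my own invention; nothing here is Daniel’s actual specification beyond the 2.12 figure). OLS is biased upward by a common shock hitting both spending and income, while the IV ratio cov(z, y)/cov(z, x) recovers the true multiplier:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_multiplier = 2.12

z = rng.normal(size=n)                 # windfall pension returns: exogenous
u = rng.normal(size=n)                 # local shock hitting spending AND income
x = 0.5 * z + u + rng.normal(size=n)   # state spending responds to both
y = true_multiplier * x + 3.0 * u + rng.normal(size=n)   # state income

c_xy = np.cov(x, y)
ols = c_xy[0, 1] / c_xy[0, 0]                   # biased: x is endogenous
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]    # instrumented estimate

assert abs(iv - true_multiplier) < 0.15         # IV is close to the truth
assert ols > iv + 0.5                           # OLS overstates the multiplier
```

The exclusion restriction is doing all the work: z moves x but is uncorrelated with u, so scaling by cov(z, x) strips out the confounded variation.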

According to the Trivers-Willard Hypothesis, individuals possessing a trait which improves the reproductive success of males more than females will be more likely to give birth to male offspring than to female offspring.  I came across a study that claims to support the hypothesis where the trait in question is promiscuity.

Our analyses of two large nationally representative samples, from the General Social Survey in 1994 and the National Longitudinal Study of Adolescent Health, confirm this prediction.  Controlling for a large number of social demographic factors that might be expected independently to influence offspring sex ratios, unrestricted sociosexual orientation significantly increases the odds that the first child is a boy.  One standard deviation increase in the unrestrictedness of sociosexual orientation increases the odds of having a son by 12-19%.

That seems like a large effect, if true.  Chullo chuck:  Barking Up The Wrong Tree.

I am a postalite, and not a blitzer. I am not good at coming up with ideas on demand and in the moment.  Pretty much all of my work is based on waiting around passively for an idea to pop into my head.  It happens in the shower, in the car, at 4:00 in the morning when I am awakened by the snow plow, etc. Whenever that happens I immediately send it to myself in an email.

Then a complicated system of tubes that I have set up routes that email to my Google Buzz feed, then my other blog, and finally to a mail folder where I keep all of these ideas archived.  A typical auto-email looks like this:

—– Original Message ——–

Subject: Album of the summer
Date: Wed, 18 Aug 2010 10:02:56 -0700
From: jeffrey ely <jeffely@northwestern.edu>
To: post@cheepthoughts.posterous.com
Tmbg flood

Grownup now sounds like kids

Snorkeling

You're such a great Parker daddy

Guy teaching daughter to surf

The day it all came together

Old man learning new tricks

Which has the bare outline of a post I never wrote about my experiences surfing this past summer, and being an old man learning new things at the same time my kids are learning new things.

I send myself about 4 of these per day on average, although most of them are just links to stuff on the web I might write about.  People also send me links and I just forward them right away.

Every morning I go through my archive to find something I want to write about. (If people have liked it on Google Buzz then I know it’s a go for the blog.)  I try to keep it in mind all day and work on what I can say about it and how to write it.  This allows me to take advantage of idle moments throughout the day to do most of the writing.  Ideally, by the time I sit down to write (usually just before going to bed at night) I have a good outline and the only thing left to do is find the right words.

(Ron Siegel took this photo of the car cemetery on Chicago’s Lake Shore Drive in last week’s blizzard.)

Celebrity yogis like Bikram Choudhury claim copyright over certain sequences of yoga postures.  That’s kinda like Wilt Chamberlain patenting his favorite, ahem, postures; but while the Kama Sutra has yet to establish priority, India’s Traditional Knowledge Digital Library is placing all known yoga asanas into the public domain.

Nine well known yoga institutions in India have helped with the documentation. “The data will be up online in the next two months. In the first phase, we have videographed 250 ‘asanas’ — the most popular ones. Chances of misappropriation with them are higher. So if somebody wants to teach yoga, he does not have to fight copyright issues. He can just refer to the TKDL. At present, anybody teaching Hot Yoga’s 26 postures has to pay Choudhury franchisee fee because he holds copyright on them,” Dr Gupta added.

Fez flip:  BoingBoing