
Apple’s latest response to the iPhone 4 antenna issue:

Upon investigation, we were stunned to find that the formula we use to calculate how many bars of signal strength to display is totally wrong. Our formula, in many instances, mistakenly displays 2 more bars than it should for a given signal strength. For example, we sometimes display 4 bars when we should be displaying as few as 2 bars. Users observing a drop of several bars when they grip their iPhone in a certain way are most likely in an area with very weak signal strength, but they don’t know it because we are erroneously displaying 4 or 5 bars. Their big drop in bars is because their high bars were never real in the first place.

Apple will soon be releasing a software update that will fix the problem by lowering the number of bars displayed on your phone.  In related news, in response to my students’ grade groveling I have re-examined the midterm and noticed that everyone’s score was 5 points higher than it should have been.  The curve has been re-calculated.

It gets harder and harder to avoid learning the outcome of a sporting event before you are able to get home and watch it on your DVR.  You have to stop surfing news web sites, stay away from Twitter, and be careful which blogs you read.  Even then there is no guarantee.  Last year I stopped to get a sandwich on the way home to watch a classic Roddick-Federer Wimbledon final (16-14 in the fifth set!) and some soccer moms mercilessly tossed off a spoiler as an intermezzo between complaints about their nannies.

No matter how hard you try to avoid them, the really spectacular outcomes are going to find you.  The thing is, once you notice this you realize that even the lack of a spoiler is a spoiler.  If the news doesn’t get to you, then at the margin that makes it more likely that the outcome was not a surprise.

Right now I am watching Serena Williams vs Yet-Another-Anonymous-Eastern-European and YAAEE is up a break in the first set.  But I am still almost certain that Serena Williams will win because if she didn’t I probably would have found out about it already.

This is not necessarily a bad thing.  Unless the home team is playing, a big part of the interest in sports is the resolution of uncertainty.  We value surprise.  Moving my prior further in the direction of certainty has at least one benefit: in the event of an upset I am even more surprised.  This has to be taken into account when I decide the optimal amount of effort to spend trying to avoid spoilers.  It means that I should spend a little less effort than I would if I were ignoring this compensating effect.

It also tells me something about how to spend that effort.  I once had a match spoiled by the Huffington Post.  I never expected to see sports news there, but ex post I should have known that if HP is going to report anything about tennis it is going to be when there was an upset.  You won’t see “Federer wins again” there.

Finally, if you really want to keep your prior and you recognize the effects above, then there is one way to generate a countervailing effect.  Have your wife watch first and commit to a random disclosure policy.  Whenever the favorite won, then with probability p she informs you and with probability 1-p she reveals nothing.
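A quick Bayesian sketch shows how the dial works.  The function name and parameterization below are mine; the policy (reveal a favorite’s win with probability p, say nothing otherwise) is the one described above:

```python
# Sketch of the random-disclosure policy (my framing): prior pi that the
# favorite won; the designated watcher reveals a favorite's win with
# probability p and stays silent otherwise (upsets are never disclosed).

def posterior_favorite_given_silence(pi, p):
    """P(favorite won | no disclosure) under the random policy."""
    silence_if_fav = 1 - p      # favorite won but she stayed quiet
    silence_if_upset = 1.0      # an upset is never disclosed
    num = pi * silence_if_fav
    den = pi * silence_if_fav + (1 - pi) * silence_if_upset
    return num / den

# With p = 0, silence carries no information: posterior equals the prior.
print(posterior_favorite_given_silence(0.8, 0.0))   # 0.8
# As p grows, silence increasingly signals an upset.
print(posterior_favorite_given_silence(0.8, 0.5))
```

Tuning p lets silence push your belief toward an upset, offsetting the ambient no-spoiler-is-a-spoiler effect.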

Tyler Cowen forwards an email sent by a loyal reader disputing the argument that governments should borrow and spend more when interest rates are low.

But assume that the U.S. borrows an extra trillion of dollars now, due in 10 years (the average debt duration of the U.S. debt is something like 4 years?). Sure, the interest rate is low, but the borrowing is cheap only as long as we assume that during the 10 years the U.S. repays this whole extra debt, compared to what would have happened in the baseline world.

This does not affect the argument in any way.  The economic argument for borrowing when interest rates are low says this.  Suppose you have a plan for the future about when you will do your spending, borrowing, and repayment.  This plan is predicated on your expectations of the path of interest rates. Now suppose that, as a surprise, interest rates are lower today than you expected.  Then, other things equal, your original plan is no longer optimal.  You should re-adjust and borrow more today.

The operative word here is “more.”  I did not write “borrow a lot today.”  And in fact the conclusion could be that you don’t borrow at all because if the original plan was to make re-payments, then “borrowing more” means (on net) just repaying less.

There is nothing at all deep about the economics here.  And in fact it’s rare that there is much deep economics involved when the economics really matters.  Economics is really, really easy.  What is hard is to use economics faithfully in your rhetoric.  Advocates of increased borrowing and spending don’t ever refer to the default plan from which we should be adjusting.  And without that (and the default plan probably doesn’t really exist) there isn’t much economics behind the rhetoric.

All sides are guilty.  Tyler’s reader should be saying “the price of funds is determined by the path of interest rates, not just their value now and therefore this mutes to some degree the effect on borrowing of a drop in interest rates.”  This is another very simple economic point.  But it’s hard to resist the temptation to distort it from a simple comparative statement to one that is absolute.

It’s a variation on the old coordinated attack problem or Rubinstein’s electronic mail game.  But this one is much simpler and even more surprising.  It is due to my colleague Jakub Steiner and his co-author Colin Stewart.

Two generals, you and me, have to coordinate an attack on the enemy.  An attack will succeed only if we both attack at the same time and if the enemy is vulnerable.

From my position I can directly observe whether the enemy is vulnerable.  You on the other hand must send a scout and he will return at some random time. We agree that once you learn that the enemy is vulnerable, you will send a pigeon to me confirming that an attack should commence.  It will take your pigeon either one day or two to complete the trip.

Suppose that indeed the enemy is vulnerable, I observe that is the case, and on day n your pigeon arrives informing me that you know it too.  I am supposed to attack.  But will I?

Since you sent a pigeon I know that you know that the enemy is vulnerable.  But what day did you send your pigeon?  It could be either n-1 or n-2.  Suppose it was n-1, i.e. the pigeon arrived in one day.  Then you don’t know for sure that the pigeon has arrived yet.  So you don’t know that I know that you know that the enemy is vulnerable.  And that means you can’t be certain that I will attack so you will not attack.  And now since I cannot rule out that you sent the pigeon on day n-1, and if that was indeed the date you sent it you will not attack, then I will not attack either.

Thus, an attack will not occur the day I receive the pigeon.  In a certain sense this is obvious because only I know what day I receive the pigeon.  But the surprising thing is that there is no system we can use to decide the date of an attack and have it be successful.

Suppose that we have decided on some system and according to that system I am supposed to attack on date k.  What must be true for me to actually be willing to follow through?  First, I must expect you to be attacking too.  And since you will only attack if you know that the enemy is vulnerable, I will only attack if I have received your pigeon confirming that you know.

But that is not enough.  You will only attack if you know that I will attack and we just argued that this requires that I know that you know that the enemy is vulnerable.  So you will attack only if you know that I have received your pigeon.  You can only be sure of this 2 days after you sent it.  And since I need to be sure you will attack, I will only attack if I received the pigeon yesterday or earlier so that I am sure that you sent it at least 2 days ago and are therefore sure that I have already received it.

But that is still not enough.  Since we have just argued that I will only attack if I received your pigeon at least 1 day ago, you can only be certain that I will attack if you sent your pigeon at least 3 days ago.  And that is therefore necessary for you to be prepared to attack.  But now since I will attack only if I am certain that you will attack, I need to be certain that you sent your pigeon at least 3 days ago and that requires that I received your pigeon at least 2 days ago (and not only yesterday.)

This goes on.  In order for me to attack I must know that you know that I know, etc. etc. that the enemy is vulnerable.  And each additional iteration of this requires that the pigeon be sent one day earlier than the previous iteration. Eventually we run out of earlier days because today is day k.  This means that I will not attack because I cannot be sure that you are sure that (iterate k times) that the enemy is vulnerable.
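The induction can be captured in a few lines of bookkeeping.  The loop below is my own toy framing of the argument, not anything from the Steiner-Stewart paper:

```python
# Toy bookkeeping for the induction above. Level 1 of knowledge means I
# simply hold the pigeon (0 days of slack). Each further level of
# "you know that I know that ..." forces the pigeon to have been
# received one day earlier than the level before.

def required_days_ago(levels):
    """Days before the planned attack the pigeon must have arrived
    for `levels` levels of mutual knowledge to hold."""
    days = 0
    for _ in range(1, levels):
        days += 1   # each extra level pushes receipt one day earlier
    return days

for t in (1, 2, 3, 10):
    print(t, required_days_ago(t))
# The requirement is t - 1 and grows without bound, so no finite attack
# date can satisfy every level of knowledge: the attack never happens.
```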

An eternal puzzle is how a husband/father handles visits by his mother without agonizing conflict between the wife and her mother-in-law.  Here is my Machiavellian solution.  The husband should engineer a conflict with his mother that puts him in the wrong.  Then the wife and her mother-in-law will naturally bond in the face of a mutual enemy.  Don’t forget the key condition that the crime has to be egregious enough so the wife does not come to your defense.  This is why the conflict should not be with the wife:  your mother, being your mother,  is naturally more inclined to side with you.  Added bonus:  husband is conveniently ostracized!

FIFA experimented with a “sudden-death” overtime format during the 1998 and 2002 World Cup tournaments, but the so-called golden goal was abandoned as of 2006.  The old format is again in use in the current World Cup, in which a tie after the first 90 minutes is followed by an entire 30 minutes of extra time.

One of the cited reasons for reverting to the old system was that the golden goal made teams conservative. They were presumed to fear that attacking play would leave them exposed to a fatal counterattack.  But this analysis is questionable.  Without the golden goal attacking play also leaves a team exposed to the possibility of a nearly-insurmountable 1 goal deficit.  So the cost of attacking is nearly the same, and without the golden goal the benefit of attacking is obviously reduced.

Here is where some simple modeling can shed some light.  Suppose that we divide extra time into two periods.  Our team can either play cautiously or attack.  In the last period, if the game is tied, our team will win with probability p and lose with probability q, and with the remaining probability, the match will remain tied and go to penalties.  Let’s suppose that a penalty shootout is equivalent to a fair coin toss.

Then, assigning a value of 1 for a win and -1 for a loss, p-q is our team’s expected payoff if the game is tied going into the second period of extra time.

Now we are in the first period of extra time.  Here’s how we will model the tradeoff between attacking and playing cautiously.  If we attack, we increase by G the probability that we score a goal.  But we have to take risks to attack and so we also increase by L the probability that they score a goal.  (To keep things simple we will assume that at most one goal will be scored in the first period of extra time.)

If we don’t attack there is some probability of a goal scored, and some probability of a scoreless first period.  So what we are really doing by attacking is taking a G-sized chunk of the probability of a scoreless first period and turning it into a one-goal advantage, and also an L-sized chunk and turning that into a one-goal deficit.  We can analyze the relative benefits of doing so in the golden goal system versus the current system.

In the golden goal system, the event of a scoreless first period leads to value p-q as we analyzed at the beginning.  Since a goal in the first period ends the game immediately, the gain from attacking is

G - L + (1-G-L)(p-q).

(A chunk of size G+L of the probability of a scoreless first period is now decisive, with net value G-L, and the remaining chunk will still be scoreless and decided in the second period.)  So, we will attack if

p - q \leq G - L + (1 - G - L) (p-q)

This inequality is comparing the value of the event of a scoreless first period p-q versus the value of taking a chunk of that probability and re-allocating it by attacking.  (Playing cautiously doesn’t guarantee a scoreless first period, but we have already netted out the payoff from the decisive first-period outcomes because we are focusing on the net changes G and L to the scoring probability due to attacking.)

Rearranging, we attack if

p - q \leq \frac{G-L}{G+L}.

Now, if we switch to the current system, a goal in the first period is not decisive.  Let’s write y for the probability that a team with a one-goal advantage holds onto that lead in the second period and wins.  With the remaining probability, the other team scores the tying goal and sends the match to penalties.

Now the comparison is changed because attacking only alters probability-chunks of size yG and yL.  We attack if

p - q \leq Gy - Ly + (1 - G - L) (p-q),

which re-arranges to

p - q \leq y\frac{G-L}{G+L}

and since y < 1, the right-hand side is now smaller.  The upshot is that the set of parameter values (p,q,y,G,L) under which we prefer to attack under the current system is a strictly smaller subset of those that would lead us to attack under the golden goal system.
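A quick numerical check of the two conditions; the symbols match the model above, but the particular parameter values are mine, chosen so the two systems give different answers:

```python
# Numerical check of the model above. Parameters (same symbols):
# p, q = our win/loss probabilities in the second period if still tied,
# G, L = extra probability of scoring/conceding if we attack,
# y = probability a one-goal lead survives the second period.
# A win is worth 1, a loss -1, a penalty shootout 0.

def attack_gain_golden(p, q, G, L):
    # value of attacking minus the value p - q of standing pat
    return (G - L) + (1 - G - L) * (p - q) - (p - q)

def attack_gain_current(p, q, G, L, y):
    # a first-period goal is now worth only y, since it can be undone
    return y * (G - L) + (1 - G - L) * (p - q) - (p - q)

p, q, G, L, y = 0.30, 0.20, 0.25, 0.10, 0.2
print(attack_gain_golden(p, q, G, L) > 0)        # True: attack
print(attack_gain_current(p, q, G, L, y) > 0)    # False: play it safe

# Consistency with the threshold p - q <= (G-L)/(G+L):
assert (p - q <= (G - L) / (G + L)) == (attack_gain_golden(p, q, G, L) >= 0)
```

For these values the team attacks under the golden goal but not under the current system, illustrating the strict-subset claim.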

The golden goal encourages attacking play.  The intuition coming from the formulas is the following.  If p > q, then our team has the advantage in a second period of extra time.  In order for us to be willing to jeopardize some of that advantage by taking risks in the first period, we must win a sufficiently large mass of the newly-created first-period scoring outcomes.  The current system allows some of those outcomes (a fraction 1-y of them) to be undone by a second-period equalizer, and so the current system mutes the benefits of attacking.

And if p<q, then we are the weaker team in extra time and so we want to attack in either case.  (This is assuming G > L.  If G< L then the result is the same but the intuition is a little different.)

I haven’t checked it but I would guess that the conclusion is the same for any number of “periods” of extra time (so that we can think of a period as just representing a short interval of time.)

In Asia the well-to-do avoid the sun (you’ve seen them with their parasols) because fair skin signals that you don’t spend your days outside, working.  In Europe they embrace the sun because a good tan signals that you don’t spend all your time inside, working.

  1. Tomato and Watermelon Soup (Cold)
  2. Marinated Anchovy “Lasagna”
  3. Tomatoes Stuffed With Squid Over Rice With its Ink and Carranza Cheese
  4. Grilled Hake With Potatoes and Iodized Mussel Juice
  5. Pan Roasted Cod With Olive Oil and Olive Oil Cream
  6. Caramelized French Toast With Ice Cream of Fresh Cheese
  7. Slightly Spicy Peach Gnocchi With Coconut Ice Cream and Vanilla Juice

Tomato and watermelon, it turns out, were made for each other. The fish was amazingly prepared. Course number 3 on its own would have been the best dinner I had in years. We drank with it a white, slightly sparkling Basque-country wine called Txomin Extaniz (2009), which itself was a revelation: the apple accent was so distinctive I almost mistook it at first for cider. The total price for two: about $2500.

But when you net out the sunk costs of the round trip airfare to Madrid, train from Madrid to Barcelona, flight and bus from Barcelona to Donostia (San Sebastian) and hotels along the way, what’s left is the paltry 100 euros we paid for the Menu al Degustacion at Bodegón Alejandro in the old city. San Sebastian is a pescatarian’s paradise and this was the third of three outstanding experiences we had here.

We were steered away from a Basque pintxo bar in Barcelona because we were told that we would be getting the real thing in San Sebastian.  My advice: have your pintxos in Barcelona or elsewhere and put SS to its best use.  It may have an absolute advantage but its comparative advantage is the restaurant scene.  I hereby rank this the best foodie playground in all of Europe for the astonishing density of incredibly high-quality, moderately priced menus.

It is truly unbelievable how easy it is to walk into any generic restaurant here, without reservations, sit down and have a phenomenal meal. And if you get bored of that it has more than its fair share of Michelin 3-stars too.

Claire Bowern is undertaking a large-scale survey of regional dialects within North America in an attempt to identify patterns of variation along the lines of this:

But with a larger database and more variables.

Please help out and go here to take a short survey and record your voice.

  1. How will NFL marketing geniuses handle Super Bowl L?
  2. It is easier to hold your breath underwater because there is no instinct to breathe.
  3. By revealed preference, everyone is better off when goods are priced $99.99
  4. The new iPhone 4 gyroscope will make this possible.
  5. If you find something you have written sitting around at someone else’s house, pick it up and read it.  Your state of mind will be such that you see your writing through their eyes.
  6. Which words are uncapitalized in blog titles?
  1. Jamiroquai, where is he (sic) now.
  2. Mickey Mouse’s adventures with amphetamines.
  3. X-ray pinup calendar.
  4. Presidential profanity.
  5. How to subtly flirt with your best friend.

Here’s what my students said about me, presented in the form of a word cloud:

What are the primary teaching strengths of the instructor?

What are the primary weaknesses of the instructor?

Please summarize your reaction to this course focusing on the aspects that were most important to you.


Here is the advice from Annie Duke, professional poker player and the 2006 Champion of the World Series of Rock, Scissors, Paper:

The other little small piece of advice that I would give you is that people tend to throw rock on their first throw. Throwing paper is usually not a good strategy because they might throw scissors. You should throw rock as well.

The key is, and this is the best piece of advice that I can give you, if you do think that you recognize the pattern from your opponent, it’s good to try to throw a tie as opposed to a win. A tie will very often get you a tie or a win, whereas a win will get you a win or a loss. For example, if you think that someone might throw a rock, it’s good to throw rock back at them. You should be going for ties.

If at first it sounds dumb, think again.  The idea is some combination of pattern learning and level-k thinking:  If she thinks that I think that I have figured out her pattern and it dictates that she will play Rock next, then she expects me to play Paper and so in fact she will play Scissors. That means I should play Rock because either I have correctly guessed her pattern and she will indeed play Rock and I will tie, or she has guessed that I have guessed her pattern and she will play Scissors and I will win.

She is essentially saying that players are good at recognizing patterns and that most players are at most level 2.
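A small payoff table makes the tie-targeting logic concrete.  The two-move opponent support (pattern Rock versus level-2 Scissors) is the one from the argument above; the code framing is mine:

```python
# Payoff check of the "go for the tie" advice: the opponent either
# follows her pattern (Rock) or, anticipating that you counter it,
# jumps a level and plays Scissors.

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    """+1 if I win, -1 if I lose, 0 on a tie."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

opponent_moves = ["rock", "scissors"]   # pattern vs. level-2 deviation
for my_move in ("rock", "paper"):
    print(my_move, [payoff(my_move, m) for m in opponent_moves])
# "rock" (the tie move) yields [0, 1]: tie or win.
# "paper" (the win move) yields [1, -1]: win or loss.
```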

Research note:  why are we wasting time analyzing penalty kicks?  Can we get data on competitive RoShamBo? While we wait for that here is an exercise for the reader:  find the minimax strategy in this game:

David Byrne, singer of the Talking Heads, solo artist, and blogger, is suing Charlie Crist for the use of the song “Road to Nowhere” in an advertisement for his Florida Senate campaign.  One of the reasons given is interesting.  Because the law requires that permission be granted:

… use of the song and my voice in a campaign ad implies that I, as writer and singer of the song, might have granted Crist permission to use it, and that I therefore endorse him and/or the Republican Party, of which he was a member until very, very recently. The general public might also think I simply license the use of my songs to anyone who will pay the going rate, but that’s not true either, as I have never licensed a song for use in an ad. I do license songs to commercial films and TV shows (if they pay the going rate), and to dance companies and student filmmakers mostly for free. But not to ads.

Note that if there were no requirement to ask for permission then there would be no such inference.  (Not that it would change things in this case because David Byrne is opposed for other reasons as well.)

Ryan Avent’s self-styled populist post takes to task a rich man’s tax-conscious balance sheet dance:

As far as I can tell, this is entirely within the law. But I don’t think it’s improper to declare it obscene. Shameful, even. With a fortune of that size, additional wealth is about little more than score-keeping.

Everyone has this natural response to a rich person desiring to avoid taxes.  We all think like Ryan does:

But let’s be honest for a moment. According to this Bloomberg story, Mr Lampert is worth $3 billion. If he earns just 1% per year on that fortune—and he certainly earns much more—then he takes home $30 million in income. Per year. That’s 600 times the median household income in America. It’s more money than a person can reasonably spend. With that much money you can binge every day, and yet the money will just keep accumulating.

But you don’t have to think much longer than that to see a different side of things.  Since Mr. Rich is beyond the binge-every-day constraint, there are lots of other things he can do with his money besides bingeing.  For example, if you were Mr. Rich you could probably think of a lot of loved ones you would like to make happy by sharing your wealth with them. Or perhaps you understand that money is what determines what gets done in the world and maybe you have very strong feelings about what should get done.

Like maybe you want to be able to donate to artists or schools or libraries.  Maybe you want to help prevent HIV infection. Is it so obvious that a rich man, already beyond bingeing, who wants an extra dollar is being more greedy than a middle-class man who wants to get a dollar closer to the bingeing stage?

Let me be clear that I don’t believe that all of the Mr. Riches are trying to be Bill and Melinda Gates.  But I don’t see how you can conclude just from the fact that someone is rich that they don’t have reasons that we would be completely sympathetic to if we knew them.

And if I were a smart do-gooder who thought that everyone on Wall Street was evil the obvious thing to do would be to start a hedge fund, rip them off, and spend their money to meet my goals.

Ghutrah greeting:  gappy3000.

Everyone is jumping on the bandwagon, including Tyler Cowen, Greg Mankiw, and even Sandeep.  They are all trumpeting this study whose bottom line is that student evaluations of teachers are inversely related to the teacher’s long-run added value.  The conclusion is based on two findings.  First, if my students do unusually well in my class they are likely to do badly in their followup classes.  Second, if my students evaluate me highly it is likely that they did unusually well in my class.

I am not jumping on the bandwagon.  I have read through the paper and while I certainly may have overlooked something (and please correct me if I have) I don’t see any way the authors have ruled out the following equally plausible explanation for the statistical findings.  First, students are targeting a GPA.  If I am an outstanding teacher and they do unusually well in my class they don’t need to spend as much effort in their next class as those who had lousy teachers, did poorly this time around, and have some catching up to do next time.  Second, students recognize when they are being taught by an outstanding teacher and they give him good evaluations.

The authors of the cited study are quick to jump to the following conclusion:  older, experienced teachers, especially those with PhDs, know how to teach “lasting knowledge” whereas younger teachers “teach to the test.”  That’s a hypothesis that sounds just right to all of us older, experienced teachers with PhDs.  But is it any more plausible than the hypothesis that older, experienced teachers with tenure don’t care about teaching and as a result their students do poorly?  Not to me.

Dear 310-2 students who will be filling out evaluations this week:  please don’t hold it against me that I am old, experienced, and have a PhD.

  1. Expect a fatwa.
  2. Lesbros.
  3. Once you’ve been an extra in a rap video, what else is left but to run for President?
  4. After watching this you won’t be in the mood for kissing.
  5. How not to tell someone they have food on their face.
  6. Presidential prose:  $#@*!

Compare two studies of a medicine’s effectiveness.  In the first study there was a placebo control group.  Subjects who actually got the medicine believed with 50% probability that they were taking a sugar pill.  In the second study there was no placebo control.  Those who got the medicine knew it.

Those who actually got the medicine had better outcomes when they knew it than when they were unsure.

Our group at Columbia has completed preliminary work involving metaanalyses of randomized controlled trials comparing antidepressant medications to a placebo or active comparator in geriatric outpatients with Major Depressive Disorder (Sneed et al. 2006). In placebo controlled trials, the medication response rate was 48% and the remission rate 33%, compared to a response rate of 62% and remission rate of 43% in the comparator trials (p < .05). The effect size for the comparison of response rate to medications in the comparator and placebo controlled trials was large (Cohen’s d = 1.2).

The World Cup starts tomorrow and I just filled out my bracket.  In academia Americans are a minority and people are intensely nationalistic.  So the optimal bracket strategy is to have USA advance as far as I can before even I burst out laughing (it turns out that’s the semi-finals this year) and also give preference to under-represented countries.  Based on a cursory survey of our department’s demographics, the team that maximizes quality per department representative is Spain.  So Spain is my team to win it all this year.

The World Cup is paradoxical because the group stage is exciting and the elimination stage is extremely boring.  There are probably many reasons for this but often people focus on the penalty shootout.  You hear arguments like this.  Playing it safe gives you essentially a coin flip.  And if the other team is playing it safe, taking risks and playing offensively can actually be worse than waiting for the coin flip.

I have heard proposals to hold the penalty shootout before extra time.  The winner of the shootout will be the winner of the match if it remains tied after extra time.  The uncertainty is resolved first, then they play.

The rule would have ambiguous effects on the quality of play.  For sure, the team that won the shootout would play defensively and the disadvantaged team would be forced to play an attacking game.  There would be exactly one team attacking.

But that would be less exciting than a game in which both are attacking so the rule change would be a net improvement only if most extra-time games would otherwise have neither team attacking.

Here is a theoretical analysis of the question by Juan Carillo.  I am not sure I can summarize his conclusions so help would be appreciated.  Here is an empirical analysis.

“If you don’t have something nice to say, don’t say anything at all.”  That is usually bad advice.  Because then when you say nothing at all it is understood that you have only unkind things to say.

If you are trying to maximize pleasantry then your policy should depend on your listener’s preferences.  Based on what you say she is going to revise her beliefs over what you think about her.  What matters is her preferences over these beliefs.

A key fact is that you have only limited control over those beliefs.    Some of the time you will say something kind and some of the time you will say something unkind.  These will move her beliefs up and down but by the law of total probability the average value of her beliefs is equal to her prior.  You control only the variance.
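The martingale property is easy to verify in a toy version.  The setup below is my own parameterization (a binary kind/unkind thought and a speak-up probability for each state), but the conclusion holds for any disclosure policy:

```python
# Toy check of the law-of-total-probability point: whatever disclosure
# policy you use, the listener's expected posterior equals her prior.
# State is "kind thought" with probability `prior`; you speak with
# probability s_kind or s_unkind depending on the state.

def expected_posterior(prior, s_kind, s_unkind):
    p_speak = prior * s_kind + (1 - prior) * s_unkind
    p_silent = 1 - p_speak
    # Bayes' rule in each case (guard against zero-probability events)
    post_speak = prior * s_kind / p_speak if p_speak else prior
    post_silent = prior * (1 - s_kind) / p_silent if p_silent else prior
    return p_speak * post_speak + p_silent * post_silent

for policy in [(1.0, 0.0), (0.5, 0.5), (0.9, 0.2)]:
    print(expected_posterior(0.6, *policy))   # always 0.6
```

Only the spread of the posterior around 0.6 changes with the policy, which is exactly the sense in which you control the variance but not the mean.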

If good feelings help at the margin more than bad feelings hurt then she is effectively risk-loving.  You should go to extremes and maximize variance.  Here the old adage applies:  you should say something nice when you have something nice to say and you should not say anything nice when you don’t.  In terms of her beliefs, it makes no difference whether you say the unkind thing or just keep quiet and allow her to infer it.  But perhaps politeness gets a lexicographic kick here and you should not say anything at all.

(One thing the standard policy ignores is the ambiguity.  Since there are potentially many unkind things you might be withholding, if she is pessimistic you might worry that she will assume the worst.  Then you should consider saying slightly-unkind things in order to prevent the pessimistic inference.  Still there is the danger of unraveling because then when you say nothing at all she will know that what is on your mind is even worse than that.)

If she is risk-averse in beliefs then you want to go to the opposite extreme and never say anything.  She never updates her beliefs.

But prospect theory suggests that her preferences are S-shaped around the prior:  risk-averse on the upside but risk-loving on the downside.  Then it is often optimal to generate some variance but not to go to extremes.  You do this by dithering.  You never give outright compliments or insults.  Your statements are always noisy and subject to interpretation.  But the signal-to-noise ratio is not zero.

A full analysis of this problem would combine the tools of psychological game theory with persuasion mechanisms à la Gentzkow and Kamenica.

A primer in the New York Times.

Kit is a freegan. He maintains that our society wastes far too much. Freeganism is a bubbling stew of various ideologies, drawing on elements of communism, radical environmentalism, a zealous do-it-yourself work ethic and an old-fashioned frugality of the sock-darning sort. Freegans are not revolutionaries. Rather, they aim to challenge the status quo by their lifestyle choices. Above all, freegans are dedicated to salvaging what others waste and — when possible — living without the use of currency. “I really dislike spending money,” Kit told me. “It doesn’t feel natural.”

It’s kinda like composting as a lifestyle, only with someone else’s waste, and instead of making fertilizer you either eat it or live in it.  An entertaining read from start to finish with cameos by roadkill, frozen toilets and even property rights.

Here is a good metaphor for a problem Mother Nature has to solve.  A small child is playing on the equipment at the playground.  The child knows what she is physically capable of but doesn’t know what is safe.  If Nature knew about swings and see-saws and monkey bars she would just encode their riskiness into the genes of the child and let the child do the optimization.

But these things came along much too recently for Nature to know about them. Fortunately Nature knows that whatever is in the child’s world was pretty likely also in the parents’ world and by now the parents have learned what is safe. So Nature can employ the parent as her agent.

But in this family-firm, the child is a specialist too.  For one thing she has up-to-the-minute information about her physical abilities which change too quickly for the parents to keep track of.  But just as importantly the child is the cheapest source of information about what’s in front of her.  Nature could press the parent into service again to investigate the set of possible activities available to the child, but this would be costly to the parent (for whom this carrier of only half of his genes is just one of many priorities) and so would require extra incentives and anyway that information is more directly accessible to the child.

So Nature’s organizational structure utilizes a tidy division of labor.  The child’s job is to identify the feasible set and the parent’s job is to veto all the alternatives that are too dangerous.  One last constraint explains the reckless kid.   The child cannot communicate the feasible set to the parent.  This leads to the third-best solution. The child just picks something nearby, say the rope bridge, and starts climbing on it. The parent is stationed nearby ready to intervene whenever the child’s first choice is too dangerous.

And thus the seeds of much later conflict are sown.

Heather Christle tweeted:

Pacifico beer tastes like it’s mad at me.

On the other hand, Elk Cove 2007 Willamette Valley Pinot Noir tastes like it’s embarrassed by me.  Almost as if we met once before on Chatroulette and sensed immediately that we were bound by some primitive psychic traction and for the briefest instant we realized how all of history had in fact led us to this seemingly random moment, face to digitized face; only to be stopped, not more than an instant later, by the simultaneous fear that our common epiphany could not be real but instead just a projection of our own deep sense of unfulfillment which now was out in the open plainly readable on our faces, the shame of which brought an end, by synchronized Nexting, to our only chance at untying life’s eternal knot, and as if now we have bumped into each other again at a party, introduced by mutual friends, and Elk Cove 2007 Willamette Valley Pinot Noir glanced at its watch and escaped, avoiding eye contact and stammering about late hours and lost sleep.


While we are on the subject, you would be well-advised not to follow me on Twitter. Here is the link not to follow. Here are the kinds of things you are better off avoiding.

  1. Mike Tyson has a relationship with cannoli.
  2. Creative Uses of the BP logo.
  3. FMRI in 1000 words.
  4. Frank Sinatra tells George Michael to chill out.
  5. How to tell someone he has B.O.

The big news is that AT&T will be discontinuing its unlimited-use data plans effective next week, which happens to coincide with Steve Jobs’ worst-kept-secret announcement of the next-generation iPhone.  People are up in arms.

Unlimited, all-you-can-eat wireless data was a beautiful thing for Apple devices on AT&T, delivering streams of Pandora, YouTube videos, a million tweets, and hundreds of webpages without worry. And now it’s dead.

AT&T’s new, completely restructured mobile data plans for both iPhones and iPads have officially launched the era of pay-per-byte data, which we’ve known was coming. We just hoped it would take a little longer. It’s the anti-Christmas.

One thing to keep in mind is that unlimited-use tariffs are not part of an efficient or profit-maximizing pricing policy, whether you consider monopoly or perfect competition.  It is hard to imagine a model under which unlimited use makes sense unless there is zero marginal cost.  (If marginal cost is positive then under unlimited use your usage will typically extend beyond the point where your marginal value equals marginal cost.  Whatever the market structure, unlimited use would be replaced by marginal-cost pricing, possibly with a reduced fixed fee.)
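To see the inefficiency concretely, here is a toy example.  The demand curve and costs are invented for illustration (they are not AT&T’s actual numbers): a user whose marginal value for the q-th GB is 10 - q dollars, facing a provider whose marginal cost is 2 dollars per GB.

```python
# Toy illustration of the efficiency point above. Assumed (made-up) numbers:
# the user's marginal value for the q-th GB is mv(q) = 10 - q dollars,
# and the provider's marginal cost is mc = 2 dollars per GB.

def surplus(q, mc=2.0):
    """Total surplus at quantity q: area under mv(q) = 10 - q, minus cost."""
    value = 10 * q - q ** 2 / 2  # integral of (10 - q) from 0 to q
    return value - mc * q

# Unlimited plan: price per GB is zero, so the user consumes until mv(q) = 0.
q_unlimited = 10.0
# Marginal-cost pricing: the user stops where mv(q) = mc, i.e. q = 8.
q_efficient = 8.0

print(surplus(q_unlimited))  # surplus under the unlimited plan
print(surplus(q_efficient))  # higher: the last two GB cost more than they're worth
```

The two extra gigabytes consumed under the unlimited plan are worth less to the user than they cost to provide, which is exactly the deadweight loss the parenthetical remark is pointing at.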

Still, the specific form of the tariff (zero per-MB cost up to some limit, then a steep price after that) annoys many people.  In fact, there are theories showing that this kind of pricing is the best way to exploit consumers who don’t accurately forecast their own usage.

But this brings me to the second thing to keep in mind.  Those exploits take advantage of people who underestimate their usage.  But here is the actual pricing menu.

I bet that you actually overestimate your usage.  I use my phone a lot for browsing the web, maps, etc. and I average under 200 MB per month.  Because some months I do go above 200MB, I will buy the 2GB plan for $25 (I don’t need tethering.)  My wife on the other hand never goes above 200MB.  So the new plan is a better deal for us.

Here’s how to check your usage.

Jonah Lehrer has a post

about why those poor BP engineers should take a break. They should step away from the dry-erase board and go for a walk. They should take a long shower. They should think about anything but the thousands of barrels of toxic black sludge oozing from the pipe.

He weaves together a few stories illustrating why creativity flows best when it is not rushed.  This is something I generally agree with, and his post is a good read, but I think one of his examples needs a second look.

In the early 1960s, Glucksberg gave subjects a standard test of creativity known as the Duncker candle problem. The problem has a simple premise: a subject is given a cardboard box containing a few thumbtacks, a book of matches, and a waxy candle. They are told to determine how to attach the candle to a piece of corkboard so that it can burn properly and no wax drips onto the floor.

Oversimplifying a bit, to solve this problem there is one quick-and-dirty method that is likely to fail and then another less-obvious solution that works every time.  (The answer is in Jonah’s post so think first before clicking through.)

Now here is where Glucksberg’s study gets interesting. Some subjects were randomly assigned to a “high drive” group, which was told that those who solved the task in the shortest amount of time would receive $20.

These subjects, it turned out, solved the problem on average 3.5 minutes later than the control subjects who were given no incentives.  This is taken to be an example of the perverse effect of incentives on creative output.

The high drive subjects were playing a game.  This generates different incentives than if the subjects were simply paid for speed.  They are being paid to be faster than the others.  To see the difference, suppose that the obvious solution works with probability p and, when it works, takes only 3.5 minutes.  The creative solution always works but it takes 5 minutes to come up with.  If p is small then someone who is just paid for speed will skip the obvious solution: if he tries it and it fails he must fall back on the creative solution anyway, for a total time of 8.5 minutes, whereas going straight for the creative solution takes only 5.

But if he is competing to be the fastest then he is not trying to maximize his expected speed.  As a matter of fact, if he expects everyone else to try the obvious solution and there are N others competing, then the probability is 1 - (1-p)^N that the fastest time will be 3.5 minutes.  This approaches 1 very quickly as N increases.  He will almost certainly lose if he tries to come up with a creative solution.

So it is an equilibrium for everyone to try the quick-and-dirty solution, and when they do so, almost all of them (on average a fraction 1-p of them) will fail and take 3.5 minutes longer than those in the control group.
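The arithmetic behind this is easy to check.  A minimal sketch with the 3.5 and 5 minute figures from the example above; the value p = 0.2 and the field sizes are made up for illustration:

```python
# Check of the tournament logic above. The 3.5 and 5 minute figures are
# from the example; p = 0.2 and the field sizes n are illustrative.

def expected_time_obvious_first(p):
    """Expected solve time if you try the quick-and-dirty method first:
    success takes 3.5 minutes, failure costs 3.5 + 5 = 8.5 in total."""
    return p * 3.5 + (1 - p) * 8.5

def prob_fastest_is_quick(p, n):
    """Chance that at least one of n rivals trying the obvious method
    succeeds, so the winning time is 3.5 minutes: 1 - (1-p)^n."""
    return 1 - (1 - p) ** n

p = 0.2
# Paid purely for speed you would skip the obvious method (7.5 > 5)...
print(expected_time_obvious_first(p))
# ...but in a tournament the quick method almost surely sets the winning time.
for n in (1, 5, 10, 20):
    print(n, round(prob_fastest_is_quick(p, n), 3))
```

With twenty rivals the probability that the winning time is 3.5 minutes is already about 0.99, so a competitor who goes for the creative solution almost certainly loses.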

Consider the game among a couple and their male marriage counselor.  The problem for the marriage counselor is to prove that he is unbiased.  It is common-knowledge at the outset that the wife worries that a male marriage counselor is biased and will always blame the wife.

Indeed if 10 weeks in a row they come in for counseling and talk about the week’s petty argument (how to stack dishes in the dishwasher, whether it matters that the towels are not folded corner-to-corner, etc.) and every time he sides with the husband, eventually the wife will want to find a new counselor.

So what happens after 9 weeks of deciding for the husband?  Now all parties know that the counselor is on his last legs.  He must start siding with the wife in order to keep his job, even if the husband is actually in the right (i.e. even if throwing out the 3-day-old soggy quesadilla in the refrigerator was the right thing to do.)  But that means that he’s now biased in favor of the wife and so the husband will fire him.

We have just concluded that if he decides for the husband 9 times in a row he will be fired.  So what happens on week 9 in the rare event that he has decided for the husband 8 times in a row?  Same thing: he is strategically biased in favor of the wife and he will be fired.

By induction he is biased even on week 1.

(NB: my marriage is beautiful (no counseling) and there is nobody who can fold a towel faster than me.)

I spent one year as an Associate Professor at Boston University.  The doors in the economics building are strange because the key turns in the opposite way you would expect.  Instead of turning the key to the right in order to pull the bolt left-to-right, you turn the key to the left.  For the first month I got it wrong every morning.

Eventually I realized that I needed to do the opposite of my instinct.  And so as I was just about to turn the key to the right I would stop myself and do the opposite.  This worked for about a week.  The problem was that as soon as I started to consistently get it right, it became second nature and then I could no longer tell what my primitive instinct was and what my second-order counter-instinct was.  I would begin to turn the key to the left and then stop myself and turn the key to the right.

I have since concluded that it is basically impossible to “do the opposite” and that we are all lesser beings because of it.  We could learn from experience much faster if we had the ability to a) remember what our natural instinct is, b) notice whether it works, and c) do the opposite when it doesn’t.

We could be George Costanza:

In a famous paper, Mark Walker and John Wooders tested a central hypothesis of game theory using data on serving strategy at Wimbledon.  The probability of winning a point conditional on serving out wide should equal the probability conditional on serving down the middle.  They find support for this in the data.

A second hypothesis doesn’t fare so well. Walker and Wooders suggest that the location of the serve should be statistically independent over time, and this is not borne out in the data.  The reason for the theoretical prediction is straightforward and follows from the usual zero-sum logic.  The server is trying to be unpredictable.  Any serial correlation will allow the returner to improve his prediction of where the serve is coming and prepare accordingly.

But this assumes there are no payoff spillovers from point to point.  However it’s probably true that having served out wide on the first serve (and, say, faulted) is effectively “practice” and this makes the server momentarily better than average at serving out wide again.  If this is important in practice, what effect would it have on the time series of serves?

It has two effects.  To understand the effects it is important to remember that optimal play in these zero-sum games is equivalent to choosing a random strategy that makes your opponent indifferent between his two strategies.  For the returner this means randomly favoring the forehand or backhand side in order to equalize the server’s payoffs from the two serving directions.  Since the server now has a boost from serving, say, out wide again, the returner must increase his probability of guessing that direction in order to balance that out. This is a change in the returner’s behavior, but not yet any change in the serving probabilities.

The boost for the server is a temporary disadvantage for the returner.  For example, if he guesses down the middle, he is more likely to lose the point now than before.  He may also be more likely to lose the point even if he guesses out wide, but let’s say the first effect outweighs the second.  Then the returner now prefers to guess out wide. The server has to adjust his randomization in order to restore indifference for the returner.  He does this by increasing the probability of serving down the middle.

Thus, a first serve fault out wide increases the probability that the next serve is down the middle.  In fact, this kind of “excessive negative correlation” is just what Walker and Wooders found.  (Although I am not sure how things break down within points versus across points, and things are more complicated when we compare ad-court serves to deuce-court serves.)
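The indifference bookkeeping above can be made concrete with a two-by-two version of the serve game.  All of the win probabilities below are made up for illustration; the only assumption carried over from the argument is that the practice boost is bigger when the returner guesses wrong than when he guesses right.

```python
# Server chooses Wide or Middle; returner guesses Wide or Middle.
# a_xy = server's probability of winning the point when serving x
# and the returner guesses y. All numbers are invented.

def server_wide_prob(a_ww, a_wm, a_mw, a_mm):
    """Probability s of serving wide that makes the returner indifferent:
    s*(1-a_ww) + (1-s)*(1-a_mw) = s*(1-a_wm) + (1-s)*(1-a_mm)."""
    return (a_mw - a_mm) / ((a_mw - a_mm) + (a_wm - a_ww))

# Baseline symmetric game: the server goes wide half the time.
base = server_wide_prob(0.5, 0.7, 0.7, 0.5)

# After a wide fault the server is "warmed up" wide. The boost is larger
# against a wrong guess (+0.10) than against a correct one (+0.02).
boosted = server_wide_prob(0.5 + 0.02, 0.7 + 0.10, 0.7, 0.5)

print(base)     # 0.5
print(boosted)  # below 0.5: the server shifts toward the middle
```

With these numbers the equilibrium probability of a wide serve drops from one half to about 0.42, which is the negative serial correlation described above.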

(lunchtime conversation with NU faculty acknowledged, especially a comment by Alessandro Pavan.)

background here.