
Here is the advice from Annie Duke, professional poker player and the 2006 Champion of the World Series of Rock, Scissors, Paper:

The other little small piece of advice that I would give you is that people tend to throw rock on their first throw. Throwing paper is usually not a good strategy because they might throw scissors. You should throw rock as well.

The key is, and this is the best piece of advice that I can give you, if you do think that you recognize the pattern from your opponent, it’s good to try to throw a tie as opposed to a win. A tie will very often get you a tie or a win, whereas a win will get you a win or a loss. For example, if you think that someone might throw a rock, it’s good to throw rock back at them. You should be going for ties.

If at first it sounds dumb, think again.  The idea is some combination of pattern learning and level-k thinking:  If she thinks that I think that I have figured out her pattern and it dictates that she will play Rock next, then she expects me to play Paper and so in fact she will play Scissors. That means I should play Rock because either I have correctly guessed her pattern and she will indeed play Rock and I will tie, or she has guessed that I have guessed her pattern and she will play Scissors and I will win.

She is essentially saying that players are good at recognizing patterns and that most players are at most level 2.
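
Here is a minimal sketch of that logic (my own illustration, not from the post): take level 0 to be the predicted pattern (Rock), let each higher level best-respond to the level below, and compare the win-seeking throw (Paper) against the tie-seeking throw (Rock).

```python
# A toy sketch of the level-k / "go for ties" logic.  Assumptions (mine, not Duke's):
# level 0 plays the predicted pattern (Rock); each higher level best-responds to the level below.

BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}   # key beats value
BEATEN_BY = {loser: winner for winner, loser in BEATS.items()}       # throw that beats the key

def best_response(opponent_throw):
    return BEATEN_BY[opponent_throw]

def level_k_throw(k, pattern_throw="Rock"):
    """Level 0 follows the pattern; level k best-responds to level k-1."""
    throw = pattern_throw
    for _ in range(k):
        throw = best_response(throw)
    return throw

def outcome(mine, hers):
    if mine == hers:
        return "tie"
    return "win" if BEATS[mine] == hers else "loss"

# Her pattern says Rock.  Compare "play to win" (Paper) with "play to tie" (Rock)
# against a level-0 opponent and a level-2 opponent (who anticipates my Paper and throws Scissors).
for her_level in (0, 2):
    hers = level_k_throw(her_level)
    print(her_level, {"win-seeking Paper": outcome("Paper", hers),
                      "tie-seeking Rock": outcome("Rock", hers)})
```

Against the pattern itself, Paper wins and Rock ties; against the level-2 opponent, Paper loses and Rock wins. That is exactly the "a tie will very often get you a tie or a win" claim.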

Research note:  why are we wasting time analyzing penalty kicks?  Can we get data on competitive RoShamBo? While we wait for that here is an exercise for the reader:  find the minimax strategy in this game:

David Byrne, singer of the Talking Heads, solo artist, and blogger, is suing Charlie Crist for the use of the song “Road to Nowhere” in an advertisement for his Florida Senate campaign.  One of the reasons given is interesting, because the law requires that permission be granted:

… use of the song and my voice in a campaign ad implies that I, as writer and singer of the song, might have granted Crist permission to use it, and that I therefore endorse him and/or the Republican Party, of which he was a member until very, very recently. The general public might also think I simply license the use of my songs to anyone who will pay the going rate, but that’s not true either, as I have never licensed a song for use in an ad. I do license songs to commercial films and TV shows (if they pay the going rate), and to dance companies and student filmmakers mostly for free. But not to ads.

Note that if there were no requirement to ask for permission then there would be no such inference.  (Not that it would change things in this case because David Byrne is opposed for other reasons as well.)

The World Cup starts tomorrow and I just filled out my bracket.  In academia Americans are a minority and people are intensely nationalistic.  So the optimal bracket strategy is to have USA advance as far as I can before even I burst out laughing (it turns out that’s the semi-finals this year) and also to give preference to under-represented countries.  Based on a cursory survey of our department’s demographics, the team that maximizes quality per department representative is Spain.  So Spain is my team to win it all this year.

The World Cup is paradoxical because the group stage is exciting and the elimination stage is extremely boring.  There are probably many reasons for this but often people focus on the penalty shootout.  You hear arguments like this.  Playing it safe gives you essentially a coin flip.  And if the other team is playing it safe, taking risks and playing offensively can actually be worse than waiting for the coin flip.

I have heard proposals to hold the penalty shootout before extra time.  The winner of the shootout will be the winner of the match if it remains tied after extra time.  The uncertainty is resolved first, then they play.

The rule would have ambiguous effects on the quality of play.  For sure, the team that won the shootout would play defensively and the disadvantaged team would be forced to play an attacking game.  There would be exactly one team attacking.

But that would be less exciting than a game in which both are attacking so the rule change would be a net improvement only if most extra-time games would otherwise have neither team attacking.

Here is a theoretical analysis of the question by Juan Carrillo.  I am not sure I can summarize his conclusions so help would be appreciated.  Here is an empirical analysis.

“If you don’t have something nice to say, don’t say anything at all.”  That is usually bad advice.  Because then when you say nothing at all it is understood that you have only unkind things to say.

If you are trying to maximize pleasantry then your policy should depend on your listener’s preferences.  Based on what you say she is going to revise her beliefs over what you think about her.  What matters is her preferences over these beliefs.

A key fact is that you have only limited control over those beliefs.    Some of the time you will say something kind and some of the time you will say something unkind.  These will move her beliefs up and down but by the law of total probability the average value of her beliefs is equal to her prior.  You control only the variance.
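
Here is a quick numeric check of that claim, with made-up numbers: whatever "say something nice" policy you adopt, the probability-weighted average of her posteriors comes back to the prior, so the policy only chooses the spread.

```python
# A toy check (illustrative numbers only) that the average posterior equals the prior,
# so a talking policy controls only the variance of her beliefs.

prior = 0.6          # her prior probability that you think well of her
p_nice_if_good = 0.9 # chance you say something nice when you really do
p_nice_if_bad  = 0.2 # chance you say something nice when you don't

p_nice = prior * p_nice_if_good + (1 - prior) * p_nice_if_bad   # total chance of a kind remark

post_if_nice  = prior * p_nice_if_good / p_nice                  # Bayes' rule after a kind remark
post_if_quiet = prior * (1 - p_nice_if_good) / (1 - p_nice)      # Bayes' rule after silence

avg_posterior = p_nice * post_if_nice + (1 - p_nice) * post_if_quiet
variance = (p_nice * (post_if_nice - prior) ** 2
            + (1 - p_nice) * (post_if_quiet - prior) ** 2)

print(round(avg_posterior, 3), round(variance, 3))   # average posterior is 0.6, the prior
```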

If good feelings help at the margin more than bad feelings hurt then she is effectively risk-loving.  You should go to extremes and maximize variance.  Here the old adage applies:  you should say something nice when you have something nice to say and you should not say anything nice when you don’t.  In terms of her beliefs, it makes no difference whether you say the unkind thing or just keep quiet and allow her to infer it.  But perhaps politeness gets a lexicographic kick here and you should not say anything at all.

(One thing the standard policy ignores is the ambiguity.  Since there are potentially many unkind things you might be withholding, if she is pessimistic you might worry that she will assume the worst.  Then you should consider saying slightly-unkind things in order to prevent the pessimistic inference.  Still there is the danger of unraveling because then when you say nothing at all she will know that what is on your mind is even worse than that.)

If she is risk-averse in beliefs then you want to go to the opposite extreme and never say anything.  She never updates her beliefs.

But prospect theory suggests that her preferences are S-shaped around the prior:  risk-averse on the upside but risk-loving on the downside.  Then often it is optimal to generate some variance but not to go to extremes.  You do this by dithering.  You never give outright compliments or insults.  Your statements are always noisy and subject to interpretation.  But the signal to noise ratio is not zero.
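
To see how that can work, here is a toy comparison (all numbers invented) of three policies under one S-shaped value function: silence, full candor, and a dithering policy that is usually vaguely nice. With these particular parameters the dithering policy does best; a different shape, for example strong loss aversion, could easily reverse the ranking.

```python
import math

# A toy comparison of talking policies when her value for beliefs is S-shaped around
# the prior: concave for good news, convex for bad news.  Parameters are invented.

prior = 0.6

def value(belief):
    """S-shaped (prospect-theory-like) value of holding a given belief."""
    if belief >= prior:
        return math.sqrt(belief - prior)    # risk-averse on the upside
    return -math.sqrt(prior - belief)       # risk-loving on the downside

def expected_value(p_nice_if_good, p_nice_if_bad):
    """Her expected value over posteriors induced by a 'say something nice' policy."""
    p_nice = prior * p_nice_if_good + (1 - prior) * p_nice_if_bad
    post_nice = prior * p_nice_if_good / p_nice if p_nice > 0 else prior
    post_quiet = prior * (1 - p_nice_if_good) / (1 - p_nice) if p_nice < 1 else prior
    return p_nice * value(post_nice) + (1 - p_nice) * value(post_quiet)

policies = {
    "silence (no information)": (0.0, 0.0),    # posteriors stay at the prior
    "full candor":              (1.0, 0.0),    # nice if and only if you mean it
    "dithering (noisy praise)": (0.95, 0.55),  # usually nice, even when you don't mean it
}
for name, (pg, pb) in policies.items():
    print(f"{name:26s} {expected_value(pg, pb):.3f}")
```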

A full analysis of this problem would combine the tools of psychological game theory with persuasion mechanisms à la Gentzkow and Kamenica.

Jonah Lehrer has a post about why those poor BP engineers should take a break. They should step away from the dry-erase board and go for a walk. They should take a long shower. They should think about anything but the thousands of barrels of toxic black sludge oozing from the pipe.

He weaves together a few stories illustrating why creativity flows best when it is not rushed.  This is something I generally agree with and his post is a good read but I think one of his examples needs a second look.

In the early 1960s, Glucksberg gave subjects a standard test of creativity known as the Duncker candle problem. The problem has a simple premise: a subject is given a cardboard box containing a few thumbtacks, a book of matches, and a waxy candle. They are told to determine how to attach the candle to a piece of corkboard so that it can burn properly and no wax drips onto the floor.

Oversimplifying a bit, to solve this problem there is one quick-and-dirty method that is likely to fail and then another less-obvious solution that works every time.  (The answer is in Jonah’s post so think first before clicking through.)

Now here is where Glucksberg’s study gets interesting. Some subjects were randomly assigned to a “high drive” group, which was told that those who solved the task in the shortest amount of time would receive $20.

These subjects, it turned out, solved the problem on average 3.5 minutes later than the control subjects who were given no incentives.  This is taken to be an example of the perverse effect of incentives on creative output.

The high drive subjects were playing a game.  This generates different incentives than if the subjects were simply paid for speed.  They are being paid to be faster than the others.  To see the difference, suppose that the obvious solution works with probability p and in that case it takes only 3.5 minutes.  The creative solution always works but it takes 5 minutes to come up with it.  If p is small then someone who is just paid for speed will not try the obvious solution, because it is very likely to fail and he would then have to fall back on the creative solution anyway, bringing his total time to 8.5 minutes.

But if he is competing to be the fastest then he is not trying to maximize his expected speed.  As a matter of fact, if he expects everyone else to try the obvious solution and there are N others competing, then the probability is 1 - (1-p)^N that the fastest time will be 3.5 minutes.  This approaches 1 very quickly as N increases.  He will almost certainly lose if he tries to come up with a creative solution.

So it is an equilibrium for everyone to try the quick-and-dirty solution, and when they do so, almost all of them (on average a fraction 1-p of them) will fail and take 3.5 minutes longer than those in the control group.
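
A back-of-the-envelope check of the argument, using the stylized numbers above and a made-up p and N:

```python
p = 0.1   # chance the quick-and-dirty fix works (made-up number)

# Paid purely for speed: expected time of "try the obvious fix first" vs going straight to the creative fix.
expected_if_try_obvious_first = p * 3.5 + (1 - p) * 8.5
print(expected_if_try_obvious_first, "vs", 5.0)   # 8.0 vs 5.0 minutes: go straight to the creative fix

# Paid to be fastest: if N rivals all gamble on the obvious fix, the chance that at least
# one of them finishes in 3.5 minutes is 1 - (1-p)^N, which grows quickly with N.
for n in (5, 10, 20, 40):
    print(n, round(1 - (1 - p) ** n, 3))          # 0.41, 0.651, 0.878, 0.985
```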

Consider the game among a couple and their male marriage counselor.  The problem for the marriage counselor is to prove that he is unbiased.  It is common-knowledge at the outset that the wife worries that a male marriage counselor is biased and will always blame the wife.

Indeed if 10 weeks in a row they come in for counseling and talk about the week’s petty argument (how to stack dishes in the dishwasher, whether it matters that the towels are not folded corner-to-corner, etc.) and every time he sides with the husband, eventually the wife will want to find a new counselor.

So what happens after 9 weeks of deciding for the husband?  Now all parties know that the counselor is on his last leg.  He must start siding with the wife in order to keep his job, even if the husband is actually in the right (i.e. even if throwing out the 3-day old soggy quesadilla in the refrigerator was the right thing to do.)  But that means that he’s now biased in favor of the wife and so the husband will fire him.

We have just concluded that if he decides for the husband 9 times in a row he will be fired.  So what happens on week 9 in the rare event that he has decided for the husband 8 times in a row?  Same thing: he is strategically biased in favor of the wife and he will be fired.

By induction he is biased even on week 1.

(NB: my marriage is beautiful (no counseling) and there is nobody who can fold a towel faster than me.)

I spent one year as an Associate Professor at Boston University.  The doors in the economics building are strange because the key turns in the opposite way you would expect.  Instead of turning the key to the right in order to pull the bolt left-to-right, you turn the key to the left.  For the first month I got it wrong every morning.

Eventually I realized that I needed to do the opposite of my instinct.  And so as I was just about to turn the key to the right I would stop myself and do the opposite.  This worked for about a week.  The problem was that as soon as I started to consistently get it right, it became second nature and then I could no longer tell what my primitive instinct was and what my second-order counter-instinct was.  I would begin to turn the key to the left and then stop myself and turn the key to the right.

I have since concluded that it is basically impossible to “do the opposite” and that we are all lesser beings because of it.  We could learn from experience much faster if we had the ability to remember a) what our natural instinct is, b) whether it works, and c) to do the opposite when it doesn’t.

We could be George Costanza:

John F Kennedy was born in Brookline and attended Devotion School.  Our kids are attending Devotion this year and our third-grader took part in a lovely event at JFK’s birthplace last week.  There were some nice speeches, including one by the head of the JFK Presidential Library.  It involved this story:

When Jack was quite young but old enough to ride a bike, he played a game of Chicken with his older brother Joe, perhaps on the very street of his birthplace.  In classic fashion, they raced towards each other on their bikes.  Joe expected some respect from his younger brother.  Joe thought Jack would swerve and let him win the game.  No such luck.  They slammed into each other and had to go to hospital.

I had never heard this story before.  I mentioned it to several Americans but they had never heard it either.  Everyone knows the famous Chicken story: Khrushchev vs Kennedy during the Cuban Missile Crisis.

Schelling could always take commonplace strategic interactions and draw fundamental lessons from them.  Similarly, it would be nice to think that JFK’s childhood experience gave him some insight into how to play Chicken when the stakes were high.

In a famous paper, Mark Walker and John Wooders tested a central hypothesis of game theory using data on serving strategy at Wimbledon.  The probability of winning a point conditional on serving out wide should equal the probability conditional on serving down the middle.  They find support for this in the data.

A second hypothesis doesn’t fare so well. Walker and Wooders suggest that the location of the serve should be statistically independent over time, and this is not borne out in the data.  The reason for the theoretical prediction is straightforward and follows from the usual zero-sum logic.  The server is trying to be unpredictable.  Any serial correlation will allow the returner to improve his prediction of where the serve is coming and prepare.

But this assumes there are no payoff spillovers from point to point.  However it’s probably true that having served to the left on the first serve (and say faulted) is effectively “practice” and this makes the server momentarily better than average at serving to the left again.  If this is important in practice, what effect would it have on the time series of serves?

It has two effects.  To understand the effects it is important to remember that optimal play in these zero-sum games is equivalent to choosing a random strategy that makes your opponent indifferent between his two strategies.  For the returner this means randomly favoring the forehand or backhand side in order to equalize the server’s payoffs from the two serving directions.  Since the server now has a boost from serving, say, out wide again, the returner must increase his probability of guessing that direction in order to balance that out. This is a change in the returner’s behavior, but not yet any change in the serving probabilities.

The boost for the server is a temporary disadvantage for the returner.  For example, if he guesses down the line, he is more likely to lose the point now than before.  He may also be more likely to lose the point even if he guesses out wide, but let’s say the first outweighs the second.  Then the returner now prefers to guess out wide. The server has to adjust his randomization in order to restore indifference for the returner.  He does this by increasing the probability of serving down the line.

Thus, a first serve fault out wide increases the probability that the next serve is down the line.  In fact, this kind of “excessive negative correlation” is just what Walker and Wooders found.  (Although I am not sure how things break down within points versus across points, and things are more complicated when we compare ad-court serves to deuce-court serves.)
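
Here is a toy 2x2 version of that adjustment, with invented point-win probabilities: solve for the mixed equilibrium before and after giving the wide serve a "practice" boost that is larger when the returner guesses wrong.

```python
# Entries are the server's point-win probabilities (made-up numbers);
# rows = serve direction (wide, down the T), columns = returner's guess (wide, T).

def mixed_equilibrium(w):
    """Equilibrium mixes in a 2x2 zero-sum game with server payoffs w[serve][guess].
    Returns (prob. server plays row 0, prob. returner plays column 0), assuming an interior solution."""
    (a, b), (c, d) = w
    p_serve_0 = (d - c) / (a - b - c + d)   # makes the returner indifferent
    p_guess_0 = (d - b) / (a - b - c + d)   # makes the server indifferent
    return p_serve_0, p_guess_0

baseline = [[0.55, 0.75],
            [0.75, 0.55]]
# After a wide fault the wide serve gets a boost, bigger when the returner guesses T (wrong)
# than when he guesses wide (right).
boosted  = [[0.57, 0.83],
            [0.75, 0.55]]

print(mixed_equilibrium(baseline))   # (0.5, 0.5)
print(mixed_equilibrium(boosted))    # serve-wide prob. falls to ~0.43; returner guesses wide more, ~0.61
```

Note that only the asymmetric part of the boost moves the server's mix; a boost that helped equally against both guesses would change the returner's behavior but leave the serving probabilities alone.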

(lunchtime conversation with NU faculty acknowledged, especially a comment by Alessandro Pavan.)

What do you do in the following awkward situation?  Your friend receives an invitation to a party.  The host is also your friend but you haven’t received an invitation.

Was the invitation lost in the mail or were you not invited?  You can’t ask the host directly because it would be too uncomfortable if the answer was that you weren’t invited.  But in the event that the invitation was lost in the mail it is in all parties’ interest to have that uncertainty resolved.

There would seem to be no custom that would allow communication of the good news and at the same time avoid communication of the bad news.

But RSVP does exactly that, as long as the custom is to RSVP both acceptances and regrets.  Then if you were invited but you do not RSVP the host will know you didn’t get the invitation, and send a followup.

Game theorists will notice that the bad news can still be inferred.  If the host does not follow up then you learn that you were not invited.  But the beauty of this system is that it is never common knowledge.  The host never knows with certainty that you know about the party you weren’t invited to.  You know about the party but you know that the host does not know that you know, etc… This higher-order uncertainty goes a long way in alleviating the awkwardness.

More generally there is value in social conventions that allow non-public communication: exchange of information, especially bad news, without making that information common knowledge.

Via kottke, an argument against children’s menus in restaurants:

Nicola Marzovilla runs a business, so when a client at his Gramercy Park restaurant, I Trulli, asks for a children’s menu, he does not say what he really thinks. What he says is, “I’m sure we can find something on the menu your child will like.” What he thinks is, “Children’s menus are the death of civilization.”

I would guess that many parents would appreciate the removal of children’s menus even if they aren’t worried about their implications for the fate of civilization.  At home the kids know what’s in the pantry and if one of the parents is not prepared to make the children starve, they quickly learn to gag and choke on the fava beans to get to the mac-n-cheese (organic!).

If the restaurant has no children’s menu then this strategy is cut from the feasible set.  The parents are effectively committed to making the child starve if she tries it.  With that commitment in place, the child’s best response is to find something on the menu she will like and eat it.

I blogged about this before and in honor of the start of the French Open I gave it some thought again and here are two ideas.

Deuce.  Each game is a race to 4 points.  (And if you are British 4 = 50.) But you have to win by 2.  Conditional on reaching a 3-3 game, the deuce scoring system helps the stronger player by comparison to a flat race to 4.  In fact, if being a stronger player means you have a higher probability of winning each point then any scoring system in which you have to win by n is better for the stronger player than the system where you only have to win by n-1.

You can think about a random walk, starting at zero (deuce) with a larger probability of moving up than down, and consider the event that it reaches n or -n.  The relative likelihood of hitting n before -n is increasing in n.
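
The claim is standard gambler's-ruin algebra; here is a two-line check with a made-up per-point probability:

```python
# From deuce, the probability of getting n points ahead before falling n behind is
# 1 / (1 + r**n) with r = (1-p)/p, which increases in n whenever p > 1/2.

p = 0.55                     # the stronger player's per-point win probability (invented)
r = (1 - p) / p
for n in (1, 2, 3, 5):
    print(n, round(1 / (1 + r ** n), 3))   # 0.55, 0.599, 0.646, 0.732
```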

This is confounded by the fact that the server has an advantage even if he is the weaker player.  But it will average out across service-games.

Grouping scoring into games and sets.  Suppose that being a stronger player means that you are better at winning the crucial points.  Then grouped scoring makes it clear which are the crucial points.  To take an extreme example, suppose that the stronger player has one freebie:  in any match he can pick one point and win that point for sure.

In a flat (ungrouped) scoring system, all points are equal and it doesn’t matter where you spend the freebie.  And it doesn’t change your chance of winning by very much.  But in grouped scoring you can use your freebie at game- or set-point.  And this has a big impact on your winning probability.

Conjecture:  freebies will be optimally used when you are game- or set-point down, not when it is set-point in your favor.  My reasoning is that if you save your freebie when you have set-point, you will still win the set with high probability (especially because of deuce.)  If you switch to using it when you are set-point down, it’s going to make a difference in the cases when there is a reversal.  Since you are the stronger player and you win each point with higher probability, the reversals in your favor have higher probability.

Any thoughts on the conjecture?  It should have implications for data.  The stronger players do better when they are ad-down than when they have the ad. And across matches, their superiority over weaker players is exaggerated in the ad-down points.
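
One way to probe the conjecture is a toy Monte Carlo. The sketch below is entirely my own setup, not data: a single race-to-4, win-by-2 game in which the stronger player holds one freebie, comparing the policy of spending it the first time you face game point against you with spending it the first time you hold game point. The conjecture predicts the first policy should win games more often.

```python
import random

def play_game(p, policy, rng):
    """One race-to-4, win-by-2 game.  'policy' says when to spend the single sure point."""
    me = opp = 0
    freebie_left = True
    while True:
        if me >= 4 and me - opp >= 2:
            return True            # I win the game
        if opp >= 4 and opp - me >= 2:
            return False           # opponent wins the game
        opp_at_game_point = opp >= 3 and opp - me >= 1
        me_at_game_point = me >= 3 and me - opp >= 1
        use = freebie_left and (
            (policy == "when_down" and opp_at_game_point) or
            (policy == "when_up" and me_at_game_point))
        if use:
            freebie_left = False
            me += 1                # the freebie wins the point for sure
        elif rng.random() < p:
            me += 1
        else:
            opp += 1

def win_rate(p, policy, trials=100_000, seed=0):
    rng = random.Random(seed)
    return sum(play_game(p, policy, rng) for _ in range(trials)) / trials

for policy in ("never", "when_up", "when_down"):
    print(policy, round(win_rate(0.55, policy), 3))
```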

My French Open forecast:  This could be the year when we have a really interesting Federer-Nadal final.

On the way from Brookline to Central Square in Cambridge to go to Toscanini’s, we turned on Hampton St to avoid roadwork and found the Myerson Tooth Corporation:

Next door is the Good News Garage owned by Click and Clack of NPR fame.

Sandeep and I are very close to finishing a first draft of our paper on torture.  As I was working on it today, I came up with a simple three-paragraph summary of the model and some results.  Here it is.

A number of strategic considerations play a central role in shaping the equilibrium. First, the rate at which the agent can be induced to reveal information is limited by the severity of the threat.  If the principal demands too much information in a given period then the agent will prefer to resist and succumb to torture. Second, as soon as the victim reveals that he is informed by yielding to the principal’s demand, he will subsequently be forced to reveal the maximum given the amount of time remaining.  This makes it costly for the victim to concede and makes the alternative of resisting torture more attractive. Thus, in order for the victim to be willing to concede the principal must also torture a resistant suspect, in particular an uninformed suspect, until the very end.  Finally, in order to maintain the principal’s incentive to continue torturing a resistant victim, the informed victim must, with positive probability, wait any number of periods before making his first concession.

These features combine to give a sharp characterization of the value of torture and the way in which it unfolds.  Because concessions are gradual and torture cannot stop once it begins, the principal waits until very close to the terminal date before even beginning to torture. Starting much earlier would require torturing an uninformed victim for many periods in return for only a small increase in the amount of information extracted from the informed.  In fact we show that the principal  starts to torture only after the game has reached the ticking time-bomb phase: the point in time after which the deadline becomes a binding constraint on the amount of information the victim can be induced to reveal. This limit on the duration of torture also limits the value of torture for the principal.

Because the principal must be willing to torture in every period, the informed victim’s concession probability in any given period is bounded, and this also bounds the principal’s payoff.  In fact we obtain a strict upper bound on the principal’s equilibrium payoff by considering an alternative problem in which the victim’s concession probability is maximal subject to this incentive constraint. This bound turns out to be useful for a number of results.  For example the bound enables us to derive an upper bound on the number of periods of torture that is independent of the total amount of information available.  We use this result to show that the value of torture shrinks to zero when the period length, i.e. the time interval between torture decisions, shortens.  In addition it implies that laws preventing indefinite detention of terrorist suspects entail no compromise in terms of the value of information that could be extracted in the intervening time.

Elena Kagan is 50 years old which is not much younger than the average age of newly appointed justices:  53.  That average age upon entry has been relatively constant over time but with life expectancies steadily increasing, the average tenure on the court has increased from 15 years before 1970 to 25 years after.

We could argue about the socially efficient entry age and tenure length but it’s more fun to think about strategy.  As a President from the Democratic Party you are today’s player in the infinite-horizon alternating-move SCOTUS appointment game. It is essentially a game of tug-of-war:  they will appoint conservatives to balance out the liberals that you will appoint in order to balance out their conservatives…

The younger your appointee the longer she will sit on the court.  On the plus side this means she is less likely to die or retire early.  On the down side you will have to live longer with a Justice whose views are harder to discern and are more likely to change.

Tradeoff?  Less than it appears.  It boils down to a comparison of two probabilities:  the probability that the older Justice will step down in a year when the Republicans control the White House versus the probability that the younger Justice will switch teams.  Unless there is a lot of uncertainty about the younger Justice, the second probability is smaller and you should appoint her.

How young should you go?  As you consider younger and younger nominees the mid-tenure defection eventually becomes the dominant concern.  The probability that a non-defector can retire under a Democrat administration reaches its maximum but the uncertainty surrounding a younger Justice steadily increases.

Jonathan Weinstein is blogging now at The Leisure of the Theory Class.  His first post is a nice one on a common fallacy in basketball strategy.

if a player has a dangerous number of fouls, the coach will voluntarily bench him for part of the game, to lessen the chance of fouling out.  Coaches seem to roughly use the rule of thumb that a player with n fouls should sit until n/6 of the game has passed.  Allowing a player to play with 3 fouls in the first half is a particular taboo.  On rare occasions when this taboo is broken, the announcers will invariably say something like, “They’re taking a big risk here; you really don’t want him to get his 4th.”

The fallacy is that in trying to avoid the mere risk of losing minutes from fouling out the common strategy loses minutes for sure by benching him.

Jonathan discusses a couple of caveats in his post and here is another one.  The best players rise to the occasion and overcome deficits as necessary.  But they need to know how much of a deficit to overcome.

Suppose you know that a player will foul out in 1 minute.  There are 5 minutes to go in the game.  If you keep him in the game now he will have to guess how many points the opponents will score in the last 4 and try to beat that.  This entails risk because the opponents might do better than expected.

If you bench him until there is 1 minute left then all the uncertainty is resolved by the time he comes back.  Now he knows what needs to be done and he does it.

If Jonathan’s argument were correct then there would be no such thing as a “closer” in baseball.  At any moment in the game you would field your most effective pitcher and remove him when he is tired.  Instead there are pitchers who specialize in pitching the final innings of the game.

The role of a closer is indeed misunderstood in conventional accounts.  Just as in Jonathan’s argument there is no reason to prefer having your best pitcher on the mound in later innings, other things equal.  All innings are the same.  But this doesn’t mean you shouldn’t save your best pitcher for the end of the game.

Suppose he can pitch for only one inning. If you use him in the 8th inning the opposition might win with a big 9th inning and then you’ve wasted your best pitcher.  It would have been better to let them score their runs in the 8th.  That way you know the game is lost before you have committed your best pitcher. You can save him for the next game.
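
A back-of-the-envelope version of that option-value argument, with invented numbers:

```python
# Two innings remain.  The opponent breaks the game open with probability q in an inning
# thrown by an ordinary pitcher and r < q in an inning thrown by the ace; you win only if
# neither remaining inning breaks open.  The ace can throw one inning, and if the game is
# already lost before his turn he stays fresh, worth a bonus b toward the next game.

q, r, b = 0.20, 0.10, 0.15            # all made-up numbers

win_prob = (1 - q) * (1 - r)          # the same whichever inning the ace throws

value_ace_in_8th = win_prob           # the ace is always used, never saved
value_ace_in_9th = win_prob + q * b   # with probability q the 8th is lost first and the ace is saved

print(round(value_ace_in_8th, 3), round(value_ace_in_9th, 3))   # 0.72 vs 0.75
```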

Here is a wide-ranging article about proposals to utilize placebos as medicine.

But according to advocates, there’s enough data for doctors to start thinking of the placebo effect not as the opposite of medicine, but as a tool they can use in an evidence-based, conscientious manner. Broadly speaking, it seems sensible to make every effort to enlist the body’s own ability to heal itself–which is what, at bottom, placebos seem to do. And as researchers examine it more closely, the placebo is having another effect as well: it is revealing a great deal about the subtle and unexpected influences that medical care, as opposed to the medicine itself, has on patients.

The article never mentions it so I wonder if any consideration has been given to the equilibrium effects.  Presumably the placebo effect requires the patient to believe that the drug is real. Then widespread use of true placebos will dilute the placebo effect.  Since real drugs also contribute a placebo effect on top of any pharmacological effects, the placebo component of existing drugs will be reduced.

Does the benefit of using placebos outweigh the cost of reducing the effectiveness of non-placebos?  If there is a complementarity between the placebo effect and real pharmacological effects it could be that zero is the optimal ratio of placebo to non-placebo treatments.

Note to my behavioral economics class:  this is a good example of a topic that would require the tools of psychological game theory due to the direct payoff consequences of beliefs.

Neither the Labour Party nor the Conservative Party has won an absolute majority in the British elections.  Each can try to rule as a minority government. This means roughly that each policy proposal would be voted on in an ad hoc fashion.  If a key vote fails to win majority support, the minority government would fall and there would be another round of jostling for position.  An alternative is to form a coalition with another party to form a government with majority support.  This would mean the large party in the coalition would have to compromise on its ideal policy positions.

Both Labour and the Conservatives need the Liberal Democrats if they are to go the latter route.  The Liberal Democrats suffer under the British electoral system where power is related to seats won in Parliament not total vote won across districts.  Hence, they support “proportional representation”.  Can the Liberal Democrats play the two parties off against each other to win this prize?

The difficulty for the Liberal Democrats is that the other two parties are in an asymmetric situation.  The Conservatives are in better shape for running a minority government than Labour because they won more seats in Parliament.  They are willing to offer less than Labour.  Labour is willing to offer more but even the total number of seats held by the Liberal Democrats and Labour is not enough to form a majority coalition government. Plus it would involve a deal with a party mired in scandal and with a dark, brooding, unpopular leader who refuses to step aside.  Neither option looks good.

Hence, the real issue is the next election which may happen in days not years.  The Liberal Democrats had great hopes of breaking out of their third party status and replacing the Labour party as the alternative to the Conservatives.  It seems that in the end, voters were too worried about putting their faith in an unknown unknown.  To break out of this hole, the Liberal Democrats have to look statesmanlike and work in the national interest not party interest.  If neither party offers them a solid commitment to electoral reform, the Liberal Democrats should stay out of any coalition and maximize influence and publicity in Parliament.  They can support sensible common values policy proposals put forward by the minority government and build themselves up in the eyes of the electorate.  Only if they win significantly more seats in the next election will the Liberal Democrats get electoral reform.

Hertz is making an offer for Dollar-Thrifty.  Consolidation of this sort helps all players in the industry by reducing capacity and allowing all firms, including those outside the merger, to raise prices.  (I already talked about this in a post about the United-Continental-US Airways merger dance.)  There is an incentive then to stay outside the merger and gain from it.  There has to be a countervailing force to overcome the positive externality of a merger.  In the rental car case, it seems Dollar has access to a leisure-traveller market that Hertz would like to get their hands on.   And there is an interesting twist to the merger deal they signed with Dollar.  The Avis CEO would like to bid for Dollar (or so he says) and writes to Dollar:

“[W]e are astonished that.. you have compounded these shortcomings by agreeing to aggressive lock-up provisions, such as unlimited recurring matching rights plus an unusually high break-up fee (more than 5.25% of the true transaction value, as described by your own financial advisor), as a deterrent to competing bids that could only serve to increase the value being offered to your shareholders.”

Hertz has built a nifty-seeming “match the competition” clause into its agreement with Dollar.  If other bidders emerge, Hertz gets to match their bids and there is a break-up fee that deters Dollar from accepting another suitor.

There are several strategic effects.  If Avis truly wants the Dollar leisure market access, this clause clearly makes it hard for them to acquire it.  But it leaves Hertz vulnerable to a spoiling strategy by Avis:  Avis can start bidding up the price Hertz pays for Dollar by making high bids for Dollar.  Avis won’t win Dollar but will leave Hertz stuck with a big payment.

Spoiling may backfire if it triggers a future price war in which Hertz is forced to take a short-run perspective and slash prices to survive.  We will see what happens in the next few days.

A water pipe to the Greater Boston area has broken.  Two million residents have to boil water before they drink it.  We were moving apartments so we were a bit slow off the mark.  By the time I got to Walgreens this morning, all the water was sold out.  Even the San Pellegrino at Whole Foods was gone.  The water shortage has all the features of a classic bank run.

Of course everyone needs more bottled water than they usually buy.  Who knows when the pipe will be fixed?  So, everyone buys extra water for insurance.  But then, this increases an individual’s incentive to buy lots of water yet further as there is greater risk of having no water.  This is like a classic bank run: the more others withdraw money, the more I withdraw money as there may be nothing left for me to withdraw later.  Lo and behold bottled water is all gone within hours, just like all the deposits in a bank facing a run.

Luckily, there was no beer run.  So, I’m safe.

What is the point of a big speech outlining your intentions when everybody already knows that when push comes to shove you are just going to do what’s in your interest?  Usually such a speech is all about the reasons for your stated intentions.  If you can change people’s minds about the facts then you can change their minds about your intentions.

But the public facts are already that, public.  There’s no changing minds about those.  At best you can change minds about how you perceive the public facts or about facts that only you know.  But here we are in the realm of private, unverifiable information and any speech about that is pure cheap talk.  You will invent facts to support whatever intentions you would like people to believe.

Except for two wrinkles.

  1. Making up a coherent set of facts that support your case and survive scrutiny is not easy.  On the other hand, the truth is always a coherent set of facts.
  2. You can only say things that you can think of.  That’s a small subset of the set of all things that could possibly be true and the truth is always in that subset.

Together these imply that cheap talk always reveals information.  It reveals that the story you are telling is one of the few coherent stories you could think of.  And if that story is complicated enough it becomes more and more likely that it is the only story of that complexity that a) is coherent and b) you could think of.  Since the truth always satisfies a) and b), this makes it ever more likely that what you are saying is the truth.

This is why when we want to change minds we make elaborate speeches full of detail.  It convinces the listener that we are telling the truth.  And this is why when we want to be inscrutable the listener will pepper us with questions in order to require so much detail that only the truth will work.

Threeway merger that is.  Or more accurately, how does a two firm merger depend on the possibility that one of the firms can merge with a third if their deal falls through?

This is the key issue in the potential United-Continental merger.  The deal has stalled because they cannot agree on the price.   Things were going well just after Continental learned that United was in merger talks with U.S. Airways.  Talks between United and U.S. Airways have collapsed because United started talking to Continental.  And as the United-U.S. Airways talks have collapsed, so have the Continental-United talks.

A man who has many girlfriends must find it hard to keep all their names straight.  I have a similar issue with this threeway merger post.  Back to the economics which I am also having a hard time keeping straight but here goes.

If two firms merge, the third firm standing outside gains:

The merged firms cut capacity and raise prices.  This is the main incentive to merge in the first place.  In the airline industry with its overcapacity and low profits,  consolidation and merger is a key strategy to regain profitability.  No wonder these firms flirt with each other periodically.  But if the merged firms consolidate, everyone else in the industry gains as prices go up.  The firms in merger talks do not take this positive externality into account in their flirtation.  They merge less than is ideal from the perspective of industry profitability. They date but don’t commit and the industry stays too large and unprofitable.

This analysis is consistent with the U.S. Airways strategy.  In a letter to employees, the C.E.O. says:

Whether we participate in a merger or not, consolidation will create a more efficient domestic industry that can better withstand economic volatility, global competition and the cyclical nature of our industry as a whole. As I have said many times, it is not necessary for us to be direct participants in a merger because the entire industry benefits when consolidation occurs.

But the same logic should apply to Continental: if the other two firms merge, Continental gains.  So, why did they go back to the negotiating table when they learned the other two firms were in merger talks?  There has to be some negative externality to Continental caused by a United-U.S. Airways merger.  Continental and United coördinate heavily even now – they are both in the Star Alliance, their flights link up etc. (I just flew to Newark on some joint Continental-United flight).  Antitrust authorities are going to take another look at the Continental-United relationship if the merger with U.S. Airways goes through.  A U.S. Airways merger can cause the Continental-United marriage to collapse.  So Continental has the incentive to work even harder at the marriage.

But if this was true before, it is still true now: if Continental and United can’t agree on a price, United can always go back to U.S. Airways.  This should lend some urgency to the merger talks.  To make a United-Continental merger more likely, the U.S. Airways C.E.O. should go back to talks with United.   The arrival of the ex-girlfriend can make the new girlfriend nervous and willing to commit.

People have analyzed strategic thinking long before the academic field of game theory started in the 1950s.  I argue that Jane Austen’s six novels, among the most widely beloved in the English language, can be understood as a systematic analysis of strategic thinking.  Austen’s novels do not simply provide interesting “case material” for the game theorist to analyze, but are themselves very ambitious and wide-ranging theoretically, providing insights not yet superseded by modern social science.

That is the abstract of a talk that Michael Chwe will give at UCLA on April 23.  Unfortunately for those of us who can’t attend, there doesn’t seem to be a paper available.  But Michael Chwe is an extremely creative and broad-minded theorist so you can bet that it’s going to be good.  And if we can’t read his thoughts on Jane Austen, there’s always Michael’s paper “Why Were the Workers Whipped?  Pain in a Principal-Agent Model.”

I am starting a new club.  Charter membership is hereby bestowed upon everyone who would never be in a club that would have them as a member.  You may quit for $100.

(By the way, I asked around and nobody wants you in the club consisting of the complement of my club.)

Is it a superstition that babies born in a Year of the Dragon will have good luck?  The Taiwanese government wanted to dispel the superstition.

The demographic spike in 1976 was sufficiently large that governments decided to issue warnings in 1987 against having babies in Dragon years because of the problems they caused for the educational system, particularly with respect to finding teachers and classroom space. Editorials were issued that claimed no special luck or intelligence for Dragon babies and a government program in Taiwan was designed to alert parents to the special problems faced by children born in an unusually large cohort (Goodkind, 1991, p. 677 cites multiple newspaper accounts of this).

But the effort failed and another spike was seen in 1988.  Why?  Because the dragon superstition is true. In this paper by Johnson and Nye, among Asian immigrants to the US, those born in Dragon years are compared to those born in non-Dragon years.  Dragon babies are more successful as measured in terms of educational attainment.  And the difference is larger than the corresponding difference for other US residents.

And of course it turns out that this is due to the self-fulfilling nature of the superstition.  Asian Dragon babies have parents who are more successful and they are more likely to have altered their fertility timing in order to have a baby in a Dragon year.  Is this because the smarter parents were more likely to be dumb enough to believe the superstition?

Or is it because of statistical discrimination?  Since the Dragon superstition is true, being a Dragon is a signal of talent and luck.  Unless these traits are observable without error, even unlucky and untalented Dragons will be treated preferentially relative to unlucky and untalented non-Dragons.  Smart parents know this and wait until Dragon years.

Thanks to Toomas Hinnosaar for the pointer.

This weekend we attended a charity auction for my kids’ pre-school.  What does a game theorist think about at a charity auction?

  1. There is a “silent auction” (sealed bid), followed by a live auction (open outcry).  How do you decide which items to put in the live auction?
  2. The silent auction is anonymous, so items with high signaling value should be moved to the live auction.  A 1-week vacation in Colorado sold for less than $1000 (who would want to signal that they don’t already have their own summer home?) whereas a day of working as an assistant at Charlie Trotter’s sold for $2500.
  3. There is a raffle.  You sell those tickets at the door when people are distracted and haven’t started counting how much they have spent yet.  But what price do you set?
  4. The economics of the charity auction are such that vendors with high P-MC markups can donate a high value item (high P) for a low cost (low MC).  This explains why the items usually have a boutique quality to them.
  5. In the silent auction, you write down your bids with a supplied pen on the bid sheet.  Sniping is pervasive.  Note for next year:  bring a cigarette lighter.  You make your last minute bids and then melt the end of the pen just enough to stop the ink from flowing.
  6. When you are in suburban Winnetka on Chicago’s North Shore, for which kind of item is the winner’s curse the strongest: art or sports tickets/memorabilia?
  7. One of the live auction side-events is a pure signaling game where you are asked to give an amount of money to a special fund.  They start with a very high request and after everyone who is willing to give that much has raised their hand, they continually lower the request.  I think this is the right timing.  With the ascending version the really big donors will give too early.
  8. How do you respond when asked to pay to enter a game with the rules to be announced later?  Answer:  treat it like a raffle.  Surprise answer:  A chicken will be placed in a cage.  The winner of the game is the player whose number the chicken poops on.

That didn’t turn out to be such a good idea.  Someone forgot to put a lid on the cage and the chicken, well-versed in the hold-up problem, found a way to use his monopoly power:

The game of chicken

That is an actual-use, signed and engraved hockey stick from Patrick Kane of the Chicago Blackhawks.  It subsequently sold for over $1000.  The chicken was unharmed and eventually spent the evening perched on a rafter high above the proceedings threatening to select a winner directly.

About a year ago I posted a link to a YouTube video of the Golden Balls “Split or Steal” game, hailing it as a godsend for teachers of game theory and the Prisoners’ Dilemma.  That video has made its way around the web in the year since and I sat down to prepare my introductory game theory lecture yesterday looking for something new.

Well, it turns out that now there are many, many new videos of Split or Steal on YouTube and you can spend hours watching these.  Here is my favorite and the one I used in class today.

I also heard from Seamus Coffey who has analyzed the data from Split or Steal games and finds:

  • Women are more cooperative than men, non-whites more than whites, the old more cooperative than the young.
  • There is more cooperation between opposite-sex players than when the players are of the same sex.
  • The young don’t cooperate with the old, and the old discriminate even more against the young.
  • Blonde women cooperate a lot.  Men cooperate less with blondes than with brunettes.

Here is a link to a paper by John List who looks at similar patterns in the game Friend or Foe.

When you are competing to be the dominant platform, compatibility is an important strategic variable.  Generally if you are the upstart you want your platform to be compatible with the established one.  This lowers users’ costs of trying yours out.  Then of course when you become established, you want to keep your platform incompatible with any upstart.

Apple made a bold move last week in its bid to solidify the iPhone/iPad as the platform for mobile applications.  Apple sneaked into its iPhone OS Developer’s agreement a new rule that keeps out of its App Store any apps developed using cross-platform tools. That is, if you write an application in Adobe’s Flash (the dominant web-based application platform) and produce an iPhone version of that app using Adobe’s portability tools, the iPhone platform is closed to you.  Instead you must develop your app natively using Apple’s software development tools.  This self-imposed incompatibility shows that Apple believes that the iPhone will be the dominant platform and developers will prefer to invest in specializing in the iPhone rather than be left out in the cold.

Many commentators, while observing its double-edged nature, nevertheless conclude that on net this will be good for end users.  Jon Gruber writes

Cross-platform software toolkits have never — ever — produced top-notch native apps for Apple platforms…

[P]erhaps iPhone users will be missing out on good apps that would have been released if not for this rule, but won’t now. I don’t think iPhone OS users are going to miss the sort of apps these cross-platform toolkits produce, though.  My opinion is that iPhone users will be well-served by this rule. The App Store is not lacking for quantity of titles.

And Steve Jobs concurs.

We’ve been there before, and intermediate layers between the platform and the developer ultimately produces sub-standard apps and hinders the progress of the platform.

Think about it this way.  Suppose you are writing an app for your own use and, all things considered, you find it most convenient to write in a portable framework and export a version for your iPhone.  That option has just been taken away from you.  (By the way, this thought experiment is not so hypothetical.  Did you know that you must ask Apple for permission to distribute to yourself software that you wrote?) You will respond in one of two ways.  Either you will incur the additional cost and write it using native Apple tools, or you will just give up.

There is no doubt that you will be happier ex post with the final product if you choose the former.  But you could have done that voluntarily before and so you are certainly worse off on net.  Now the “market” as a whole is just you divided into your two separate parts, developer and user.  Ex post all parties will be happy with the apps they get, but this gain is necessarily outweighed by the loss from the apps they don’t get.
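
The revealed-preference point can be put in two lines (numbers invented): removing an option from your own choice set can only lower the best payoff you can achieve.

```python
# A developer/user who values a native app at 10 and a cross-platform app at 7,
# with development costs of 8 and 3 respectively.  Net payoffs:
options_before = {"native": 10 - 8, "cross-platform": 7 - 3, "give up": 0}
options_after  = {"native": 10 - 8, "give up": 0}   # the new rule removes one option

print(max(options_before.values()), max(options_after.values()))   # 4 vs 2: weakly worse off
```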

Is there any good argument why this should not be considered anti-competitive?

Obama’s Nuclear Posture Review has been revealed.  The main changes:

(1) We promise not to use nuclear weapons on nations that are in conflict with the U.S. even if they use biological and chemical weapons against us;

(2) Nuclear response is on the table against countries that are nuclear, in violation of the N.P.T., or are trying to acquire nuclear weapons.

This is an attempt to use a carrot and stick strategy to incentivize countries not to pursue nuclear weapons.  But is it any different from the old strategy of “ambiguity” where all options are left on the table and nothing is clarified?  Elementary game theory suggests the answer is “No”.

First, the Nuclear Posture Review is “Cheap Talk”, the game theoretic interpretation of the name of our blog.  We can always ignore the stated policy, go nuclear on non-nuclear states or non-nuclear on nuclear states – whatever is optimal at the time of decision.  Plenty of people within the government and outside it are going to push the optimal policy so it’s going to be hard to resist it. Then, the words of the review are just that – words.  Contracts we write for private exchange are enforced by the legal system.  For example a carrot and stick contract between an employer and employee, rewarding the employee for high output and punishing him for low output, cannot be violated without legal consequences.  But there is no world government to enforce the Nuclear Posture Review so it is Cheap Talk.

If our targets know our preferences, they can forecast our actions whatever we say or do not say, so-called backward induction.  So, there is no difference between the ambiguous regime and the clear regime.

What if our targets do not know our preferences?  Do they learn anything about our preferences by the posture we have adopted? Perhaps they learn we are “nice guys”?  But even bad guys have an incentive to pretend they are nice guys before they get you.  Hitler hid his ambitions behind the facade of friendliness while he advanced his agenda.  So, whether you are a good guy or bad guy, you are going to send the same message, the message that minimizes the probability that your opponent is aggressive.  This is a more sophisticated version of backward induction. So, your target is not going to believe your silver-tongued oratory.

We are left with the conclusion that a game theoretic analysis of the Nuclear Posture Review finds it little different from the old policy of ambiguity.

It’s as if someone at the New York Times scanned this blog, profiled me, and assembled an article that hits every one of my little fleemies:

(Follow closely now; this is about the science of English.) Phoebe and Rachel plot to play a joke on Monica and Chandler after they learn the two are secretly dating. The couple discover the prank and try to turn the tables, but Phoebe realizes this turnabout and once again tries to outwit them.

As Phoebe tells Rachel, “They don’t know that we know they know we know.”

Literature leverages our theory of mind.

Humans can comfortably keep track of three different mental states at a time, Ms. Zunshine said. For example, the proposition “Peter said that Paul believed that Mary liked chocolate” is not too hard to follow. Add a fourth level, though, and it’s suddenly more difficult. And experiments have shown that at the fifth level understanding drops off by 60 percent, Ms. Zunshine said. Modernist authors like Virginia Woolf are especially challenging because she asks readers to keep up with six different mental states, or what the scholars call levels of intentionality.

And they even drag evolution into it.

To Mr. Flesch fictional accounts help explain how altruism evolved despite our selfish genes. Fictional heroes are what he calls “altruistic punishers,” people who right wrongs even if they personally have nothing to gain. “To give us an incentive to monitor and ensure cooperation, nature endows us with a pleasing sense of outrage” at cheaters, and delight when they are punished, Mr. Flesch argues. We enjoy fiction because it is teeming with altruistic punishers: Odysseus, Don Quixote, Hamlet, Hercule Poirot.

Cordobés address:  Marcin Peski.