
Someone asks you a question and you have an intuitive understanding of precisely what is being asked. If you are not a game theorist you stop there and answer.

If you are a game theorist you start to analyze the question and discover that, as with all language, there is some ambiguity. There’s more than one way to answer the question: the answer could be very detailed or just straightforward, the question might actually be rhetorical, there may be some implicit message to you in the question.

You begin to analyze how else she might have posed the same question. The fact that she chose this particular wording over another gives you clues about what precisely she is getting at. By a process of elimination this leads you to refine your interpretation of the question.

But if you are just a mediocre game theorist it’s pretty likely your analysis is totally wrong and you are worse off than if you hadn’t ever thought to analyze it. Indeed there is a good reason that your intuitive interpretation was the right one: the language evolved that way. And the evolution was probably so complex that there is no way a mediocre game theorist could have traced through the path of evolution to deduce that interpretation.

This is like how drugs can be found in compounds that have evolved in the plant and animal kingdom despite the fact that science has no way of knowing how to synthesize them.

And of course pretty much all of us are mediocre game theorists at best.

How much do your eyes betray you?

Have two subjects play matching pennies.  They will face each other but separated by a one-way mirror.  Only one subject will be able to see the other’s face.  He can only see the face, not anything below the chin.

Each subject selects his action by touching a screen.  Touch the screen to the West to play Heads, touch the screen on the East to play Tails.  (East-West rather than left-right so that my Tails screen is on the same side as your Tails screen.  This makes it easier to keep track.)

You have to touch a lighted region of the screen in order to have your move registered and the lighted region is moving around the screen.  This is going to require you to look at the screen you want to touch.  But you can look in one direction and then the other and touch only the screen you want.  Your hands are not visible to the other subject.

How much more money is earned by the player who can see the other’s eyes?

Now do the same with Monkeys.
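To get a feel for the stakes, here is a minimal sketch, with an invented leak probability, of how much the seeing player could gain if the other’s glance gives away the intended play in some fraction of rounds. In this toy model the watcher’s average gain per round is exactly the leak probability.

```python
import random

def play_round(leak_prob, rng):
    """One round of matching pennies: the watcher wins a dollar on a match.

    The hidden player mixes 50-50.  With probability leak_prob the watcher reads
    the intended action off the other's glance and matches it; otherwise he guesses."""
    hidden = rng.choice("HT")
    watcher = hidden if rng.random() < leak_prob else rng.choice("HT")
    return 1 if watcher == hidden else -1      # the watcher's payoff

def average_gain(leak_prob, rounds=100_000, seed=0):
    rng = random.Random(seed)
    return sum(play_round(leak_prob, rng) for _ in range(rounds)) / rounds

for q in (0.0, 0.1, 0.25, 0.5):
    # a leak in q of the rounds turns those rounds into sure wins and leaves the rest a coin flip
    print(f"leak probability {q:.2f}: watcher's average gain per round {average_gain(q):+.3f}")
```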

(Conversation with Adriana Lleras-Muney)

Eye color and cuckoldry:

The human eye color blue reflects a simple, predictable, and reliable genetic mechanism of inheritance. Blue-eyed individuals represent a unique condition, as in their case there is always direct concordance between the genotype and phenotype. On the other hand, heterozygous brown-eyed individuals carry an allele that is not concordant with the observed eye color. Hence, eye color can provide a highly visible and salient cue to the child’s heredity. If men choose women with characteristics that promote the assurance of paternity, then blue-eyed men should prefer and feel more attracted towards women with blue eyes.

This calls for an experiment.

The eye color in the photographs of each model was manipulated so that a same face would be shown with either the natural eye color (e.g., blue) or with the other color (e.g., brown). Both blue-eyed and brown-eyed female participants showed no difference in their attractiveness ratings for male models of either eye color. Similarly, brown-eyed men showed no preference for either blue-eyed or brown-eyed female models. However, blue-eyed men rated as more attractive the blue-eyed women than the brown-eyed ones. We interpret the latter preference in terms of specific mate selective choice of blue-eyed men, reflecting strategies for reducing paternity uncertainty.
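For what it’s worth, the inheritance logic the passage relies on is easy to spell out in the simplified one-locus model it assumes (brown dominant, blue recessive); the genotypes below are just the textbook Punnett-square cases, not anything from the study.

```python
from itertools import product

def child_phenotypes(mother, father):
    """Punnett square for one bi-allelic locus: 'B' (brown) is dominant over 'b' (blue)."""
    counts = {"brown": 0, "blue": 0}
    for allele_m, allele_f in product(mother, father):
        counts["brown" if "B" in allele_m + allele_f else "blue"] += 1
    return {k: v / 4 for k, v in counts.items()}

# Two blue-eyed parents are necessarily bb x bb, so a brown-eyed child is impossible:
# for a blue-eyed father, the child's eye color is a clean paternity cue.
print(child_phenotypes("bb", "bb"))   # {'brown': 0.0, 'blue': 1.0}

# A heterozygous brown-eyed father carries a hidden 'b', so eye color reveals little.
print(child_phenotypes("Bb", "bb"))   # {'brown': 0.5, 'blue': 0.5}
```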

Everybody is reacting to the Golden Balls video that I and others have posted. They are saying that the Split or Steal game has been solved.  I am not so sure.

  1. First of all I would like to point out that this solution was suggested here in the comments the first time I (or anybody else I believe) linked to Golden Balls in April 2009.  Florian Herold and Mike Hunter wrote:   “Perhaps a better strategy would be to tell your opponent that you are going to pick steal no matter what, and then offer to split the money after the show. Pointing out that your offer constitutes a legally binding oral contract, which has been taped, and viewed by hundreds of thousands of witnesses.  That way your opponent can opt to pick split, and half the money with you. Or defect in which case you both get nothing.”
  2. Also, Greg Taylor has a good analysis in the comments to Friday’s post.
  3. But the successful application of the idea in the most recent video ironically shows the flaw in their reasoning.  Consider the player who receives the proposal and is asked to play Split.  This is the player on the left in the video.  He should ask himself whether he believes that the proposing player will actually play Steal.  Florian, Mike, and the rest of the Internet make the observation that Steal is a dominant strategy and therefore a promise to play Steal is credible (a quick check of the dominance claim is sketched after this list).  But Steal is a dominant strategy for a player with the standard payoffs, and the guy who makes this proposal has revealed that he does not have the standard payoffs.
  4. Now you may respond by saying that the proposal to play (Split, Steal) and divide the winnings at the end is in fact a selfish proposal as it avoids the inevitable (Steal, Steal) outcome. So, you say that the proposer is in fact confirming that he has the standard payoffs and therefore that Steal is a dominant strategy and his promise is credible.
  5. But let’s look more closely. If he intends to carry out his proposal then he expects to end up with half of the winnings. Indeed he expects to have the full check given to him and, whether because of altruism, fairness, or reputational incentives, to prefer to hand over half of it to the opponent. As he sits there with the balls in his hand and the expectation of this eventual outcome, he can’t avoid concluding that the cheapest way to bring about that outcome is to instead just play Split right now and allow the producers of the show to enforce the agreement.
  6. Given this the player who is considering this proposal should not believe it.  He should believe that the proposer is too nice to carry out his nice proposal.  A selfish player faced with this proposal should play Steal because he should expect the proposer to play Split.
  7. Having dispensed with this try, my personal favorite solution is the one proposed by Evan and elaborated by Emil in which the two men commit to randomize by picking each other’s balls.
  8. In any case, this video is an essential companion to the original for any undergraduate game theory course.
  9. Finally, does this Golden Balls show actually exist?  In the present?  How long ago did this happen?  Or is this just some kind of Truman Show like experiment you are all subjecting me to?
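For reference, here is the dominance check mentioned in point 3, using the textbook Split or Steal payoffs with the jackpot normalized to 1 (so this is the standard, selfish payoff matrix, not the proposer’s revealed preferences): Steal weakly dominates Split, which is exactly why the promise is credible only for a player who actually has these payoffs.

```python
# Row player's payoff as a share of the jackpot; the game is symmetric.
PAYOFF = {
    ("split", "split"): 0.5,
    ("split", "steal"): 0.0,
    ("steal", "split"): 1.0,
    ("steal", "steal"): 0.0,
}

def weakly_dominates(a, b):
    """True if action a does at least as well as b against every opponent action."""
    return all(PAYOFF[(a, opp)] >= PAYOFF[(b, opp)] for opp in ("split", "steal"))

print(weakly_dominates("steal", "split"))   # True: Steal weakly dominates Split
print(weakly_dominates("split", "steal"))   # False
```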

The Golden Balls strategy we have been waiting for.

Boonie bobble:  Emil Temnyalov

I am catching up on my Mad Men viewing after a spring break trip abroad. I watched three episodes in one sitting last night. In Episode 3, copywriter Peggy interviews candidates for an open position. She likes the work of Michael Ginsburg, whose portfolio is labelled “judge not, lest you be judged”.  Her co-worker agrees with Peggy’s assessment of Ginsburg’s work but advises her not to hire him because, if Ginsburg turns out to be a better copywriter than Peggy, she risks losing her job to him. Later in the episode (or was it the next?), Pete humiliates Roger, taking credit for winning an account for the advertising company. Roger storms out. He says he was good to Pete when he was young, recruited him, and look how he is lording it over Roger now. A portent of Peggy’s future?

Recruiting and peer review are plagued with incentive problems in the presence of career concerns. If you recruit somebody good, you risk the chance that they replace you later on. You have an incentive to select bad candidates. You have an incentive to denigrate other people’s good work (the not-invented-here syndrome) or even deliberately promote their bad work in the hope that it fails dramatically and allows you to leap over them in some career race. The solution in academia is tenure. If you have a job for life, you can feel free to hire great candidates. (Various psychological phenomena such as insecurity compromise this solution of course!) Peggy does not have tenure and even Roger, who is a partner, faces the ignominy of playing second fiddle to a young upstart. Watch out Peggy!

Suppose you and I are playing a series of squash matches and we are playing best 2 out of 3.  If I win the first match I have an advantage for two reasons.  First is the obvious direct reason that I am only one match short of wrapping up the series while you need to win the next two.  Second is the more subtle strategic reason, the discouragement effect.  If I fight hard to win the next match my reward is that my job is done for the day, I can rest and of course bask in the glow of victory.  As for you, your effort to win the second match is rewarded by even more hard work to do in the third match.

Because you are behind, you have less incentive than me to win the second match and so you are not going to fight as hard to win it.  This is the discouragement effect.  Many people are skeptical that it has any measurable effect on real competition.  Well I found a new paper that demonstrates an interesting new empirical implication that could be used to test it.

Go back to our squash match and now let’s suppose instead that it’s a team competition.  Each team has three players and we will match them up according to strength and play a best two out of three team competition.  Same competition as before but now each subsequent game is played by a different pair of players.

A new paper by Fu, Lu, and Pan called “Team Contests With Multiple Pairwise Battles” analyzes this kind of competition and shows that they exhibit no discouragement effect.  The intuition is straightforward:  if I win the second match, the additional effort that would have to be spent to win the third match will be spent not by me, but by my teammate.  I internalize the benefits of winning because it increases the chance that my team wins the overall series but I do not internalize the costs of my teammate’s effort in the third match.  This negative externality is actually good for team incentives.

The implied empirical prediction is the following.  Comparing individual matches versus team matches, the probability of a comeback victory conditional on losing the first match will be larger in the team competition.  A second prediction is about the very first match.  Without the discouragement effect, the benefit from winning the first match is smaller.  So there will be less effort in the first match in the team versus individual competition.
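Here is a back-of-the-envelope version of that comparison: treat each match as a Tullock (lottery) contest whose stakes are the continuation-value differences, and solve the best-of-three by backward induction. This is only a sketch under those assumptions, not the model in Fu, Lu, and Pan, but it produces both predictions: the trailing side’s comeback probability is 1/16 in the individual series versus 1/4 in the team series, and first-match effort is lower in the team series.

```python
def tullock(v1, v2):
    """Closed-form equilibrium of a two-player Tullock (lottery) contest in which
    the players value winning at v1 and v2: win probabilities equal effort shares.
    Returns (p1, effort1, effort2)."""
    p1 = v1 / (v1 + v2)
    effort1 = v1 * v1 * v2 / (v1 + v2) ** 2
    effort2 = v1 * v2 * v2 / (v1 + v2) ** 2
    return p1, effort1, effort2

def individual(a, b):
    """Best-of-three between two identical players; each battle is a Tullock contest
    whose stakes are the continuation-value differences.  The series prize is 1 and
    effort is a dead-weight cost.  Returns (U_A, U_B) at state (a, b) wins."""
    if a == 2:
        return 1.0, 0.0
    if b == 2:
        return 0.0, 1.0
    if_a_wins = individual(a + 1, b)
    if_b_wins = individual(a, b + 1)
    stake_a = if_a_wins[0] - if_b_wins[0]
    stake_b = if_b_wins[1] - if_a_wins[1]
    p_a, x_a, x_b = tullock(stake_a, stake_b)
    return (if_b_wins[0] + p_a * stake_a - x_a,
            if_a_wins[1] + (1 - p_a) * stake_b - x_b)

def team(a, b):
    """Same series, but each battle is fought by a fresh pair of teammates, each of
    whom values the team prize at 1 and pays only their own battle's effort.
    Returns the probability that team A wins the series from state (a, b)."""
    if a == 2:
        return 1.0
    if b == 2:
        return 0.0
    q_win, q_lose = team(a + 1, b), team(a, b + 1)
    stake = q_win - q_lose            # each current combatant's personal stake (symmetric teams)
    p_a, _, _ = tullock(stake, stake)
    return q_lose + p_a * (q_win - q_lose)

# Prediction 1: the side that loses the first match comes back more often in the team format.
U11 = individual(1, 1)
trailer_stake, leader_stake = U11[1], 1.0 - U11[0]
p_trailer, _, _ = tullock(trailer_stake, leader_stake)
print("P(comeback), individual:", p_trailer * 0.5)        # 0.0625
print("P(comeback), team      :", 1.0 - team(1, 0))       # 0.25

# Prediction 2: less effort in the very first match of the team contest.
stake_ind = individual(1, 0)[0] - individual(0, 1)[0]
stake_team = team(1, 0) - team(0, 1)
print("first-match effort, individual:", round(tullock(stake_ind, stake_ind)[1], 3))   # ~0.198
print("first-match effort, team      :", round(tullock(stake_team, stake_team)[1], 3)) # 0.125
```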

My son and I went to see the Cubs last week as we do every Spring.

The Cubs won 8-0 and Matt Garza was one out away from throwing a complete game shutout, a rarity for a Cub.  The crowd was on its feet with full count to the would-be final batter who rolled the ball back to the mound for Garza to scoop up and throw him out.  We were all ready to give a big congratulatory cheer and then this happened.  This is a guy who was throwing flawless pitches to the plate for nine innings and here with all the pressure gone and an easy lob to first he made what could be the worst throw in the history of baseball and then headed for the showers.  Cubs win!

But this Spring we weren’t so interested in the baseball out on the field as we were in the strategery down in the toilet. Remember a while back when I wrote about the urinal game? It seems like it was just last week  (fuzzy vertical lines pixellating then unpixellating the screen to reveal the flashback:)

Consider a wall lined with 5 urinals. The subgame perfect equilibrium has the first gentleman take urinal 2 and the second caballero take urinal 5.  These strategies are pre-emptive moves that induce subsequent monsieurs to opt for a stall instead out of privacy concerns.  Thus urinals 1, 3, and 4 go unused.

So naturally we turn our attention to The Trough.

A continuous action space.  Will the trough induce a more efficient outcome in equilibrium than the fixed array of separate urinals?  This is what you come to Cheap Talk to find out.

Let’s maintain the same basic parameters. Assume that the distance between the centers of two adjacent urinals is d and let’s consider a trough of length 5d, i.e. the same length as 5 side-by-side urinals (now with invincible pink mystery ice located invitingly at positions d/2 + kd for k = 1, 2, 3, 4.) The assumption in the original problem was that a gentleman pees if and only if there is nobody in a urinal adjacent to him. We need to parametrize that assumption for the continuous trough. It means that there is a constant r such that he refuses to pee in a spot in which someone is currently peeing less than a distance r from him.  The assumption from before implies that d < r < 2d.  Moreover the greater the distance to the nearest reliever the better.

The first thing to notice is that the equilibrium spacing from the original urinal game is no longer a subgame-perfect equilibrium. In our continuous trough model that spacing corresponds to gentlemen 1 and 2 locating themselves at positions d/2 and 7d/2 measured from the left boundary of the trough.  Suppose r <= 3d/2. Then the third man can now utilize the convex action space and locate himself at position 2d where he will be a comfortable distance 3d/2>= r away from the other two. If instead r > 3d/2, then the third man is strictly deterred from intervening but this means that gentleman number 2 would increase his personal space by locating slightly farther to the right whilst still maintaining that deterrence.

So what does happen in equilibrium? I’ve got good news and bad news. The good news first. Suppose that r < 5d/4. Then in equilibrium 3 guys use the trough whereas only 2 of the arrayed urinals were used in the original equilibrium. In equilibrium the first guy parks at d/2 (to be consistent with the original setup we assume that he cannot squeeze himself any closer than that to the left edge of the trough without risking a splash on the shoes), the second guy at 9d/2, and the third guy right in the middle at 5d/2. They are a distance of 2d > r from one another, and there is no room for anybody else because anybody who came next would have to be standing at most a distance d < r from two of the incumbents. This is a subgame perfect equilibrium because the second guy knows that the third guy will pick the midpoint and so to keep a maximal distance he should move to the right edge. And foreseeing all of this the first guy moves to the left edge.

Note well that this is not a Pareto improvement. The increased usage is offset by reduced privacy. They are only 2d away from each other whereas the two urinal users were 3d away from each other.

Now the bad news when r > 5d/4.  In this case it is possible for the first two to keep the third out.  For example suppose that 1 is at 5d/4 and 2 is at 15d/4.  Then there is no place the third guy can stand that is more than 5d/4, hence at least r, away from both of them.  In this case the equilibrium has the first two guys positioning themselves with a distance between them equal to exactly 2r, thus maximizing their privacy subject to the constraint that the third guy is deterred.  (One such equilibrium is for the first two to be an equal distance from their respective edges, but there are other equilibria.)

The really bad news is that when r is not too large, the two guys even have less privacy than with the urinals. For example if r is just above 5d/4 then they are only 10d/4 away from each other which is less than the 3d distance from before.  What’s happening is that the continuous trough gives more flexibility for the third guy to squeeze between so the first two must stand closer to one another to keep him away.
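A quick numerical check of both cases, with d = 1 and feasible spots restricted to [d/2, 9d/2] as above. The hypothetical best_spot helper just asks where the next arrival would stand, if anywhere.

```python
def best_spot(occupied, r, d=1.0, steps=4000):
    """Where would the next arrival stand at a trough of length 5d?  He scans the
    feasible range [d/2, 9d/2], picks the point maximizing his distance to the
    nearest occupant, and only steps up if that distance is at least r."""
    lo, hi = d / 2, 5 * d - d / 2
    best_x, best_dist = None, -1.0
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        dist = min(abs(x - o) for o in occupied)
        if dist > best_dist:
            best_x, best_dist = x, dist
    return (round(best_x, 3), round(best_dist, 3)) if best_dist >= r else None

# The original two-urinal spacing is not immune: a third man fits at the midpoint.
print(best_spot([0.5, 3.5], r=1.2))            # (2.0, 1.5)

# Good-news case, r < 5d/4: three users at d/2, 5d/2, 9d/2 and a fourth is deterred.
print(best_spot([0.5, 2.5, 4.5], r=1.2))       # None

# Bad-news case, r > 5d/4: two men spaced to keep the third out.
print(best_spot([1.25, 3.75], r=1.3))          # None
```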

Instant honors thesis for any NU undergrad who can generalize the analysis to a trough of arbitrary length.

Reality shows eliminate contestants one at a time. Shows like American Idol do this by holding a vote. The audience is asked to vote for their favorite contestant and the one with the fewest votes is eliminated.

Last week on American Idol something very surprising happened. The two singers who were considered to have given the best performances the night before, and who were strong favorites to win the whole thing, received among the fewest votes. Indeed a very strong favorite, Jessica Sanchez, was “voted off” and only survived because the judges kept her alive by using their one intervention of the season.

The problem in a nutshell is that American Idol voters are deciding whom to eliminate but instead of directly voting for the one they want to eliminate, they are asked to vote for the person they don’t want eliminated.  This creates highly problematic strategic incentives which can easily lead to a strong favorite being eliminated.

For example suppose that a large number of voters prefers contestant S to all others. But while they agree on S, they disagree about the ranking of the other contestants and they are interested in keeping their second and third favorites around too.

The supporters of S have a problem:  maintaining support for S is a public good which can be undermined by their private incentives.  In particular some of them might be worried that their second favorite contestant needs help. If so, and if they think that S has enough support from others, then they will switch their vote from S to help save that contestant. But if they fail to coordinate, and too many of the S supporters do this, then S is in danger of being eliminated.

This problem simply could not arise if American Idol instead asked audiences to vote out the contestant they want to see eliminated. Consider again the situation described above.  Yes there will still be incentives to vote strategically, indeed any voting system will give rise to some kind of manipulation.  But a strong favorite like S will be insulated from their effects. Here’s why.  An honest voter votes for the contestant she likes least.  A strategic voter might vote instead for her next-to-least favorite.  She might do this if she thinks that voting out her least favorite is a wasted vote because not enough other people will vote similarly.  And she might do this if she thinks that one of her favorite contestants is a risk for elimination.

But no matter how she tries to manipulate the vote it will be shifting votes around for her lower-ranked contestants without undermining support for her favorite. Indeed it is a dominated strategy to vote against your favorite and so a heavily favored contestant like S could never be eliminated in a voting-out system as it can with the current voting-in system.
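A stylized simulation of the public-good problem, with invented numbers: every fan of the favorite S ranks S first, but each one independently lends their vote to a second favorite with some probability, and with enough lending even a 40% base can end up with the fewest votes. In a vote-out system those lent votes would be shuffling around among the lower-ranked contestants instead of draining S’s total.

```python
import random

def prob_favorite_voted_off(lend_prob, n_voters=1000, fan_share=0.4, n_others=4,
                            trials=2000, seed=1):
    """Vote-for-your-favorite system: the contestant with the fewest votes goes home.
    S's fans each lend their vote to a random other contestant with probability
    lend_prob; everyone else votes sincerely for their own favorite."""
    rng = random.Random(seed)
    n_fans = int(fan_share * n_voters)
    others_each = (n_voters - n_fans) // n_others
    eliminated = 0
    for _ in range(trials):
        votes = [others_each] * n_others
        s_votes = 0
        for _ in range(n_fans):
            if rng.random() < lend_prob:
                votes[rng.randrange(n_others)] += 1   # vote lent to a second favorite
            else:
                s_votes += 1
        eliminated += s_votes < min(votes)            # S ends up with the fewest votes
    return eliminated / trials

for q in (0.0, 0.3, 0.5, 0.6):
    print(f"lending probability {q:.1f}: P(favorite S eliminated) = {prob_favorite_voted_off(q):.2f}")
```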

In most of the US there is “no-fault” divorce.  Either party can petition for divorce without having to demonstrate to the court any reason to legitimize the petition. The divorce is usually granted even if the other party wants to remain married.

In England, you must prove to the judge that there is valid reason for the divorce, even if both parties want to separate. This is particularly problematic when only one party wants to separate but doesn’t have a valid reason for it. Then they must make the marriage sufficiently unpleasant for the spouse so that the spouse will a) want a divorce and b) have a verifiable good reason for it.  For example:

One petition read: “The respondent insisted that his pet tarantula, Timmy, slept in a glass case next to the matrimonial bed,” even though his wife requested “that Timmy sleep elsewhere.”

 or

The woman who sued for divorce because her husband insisted she dress in a Klingon costume and speak to him in Klingon. The man who declared that his wife had maliciously and repeatedly served him his least favorite dish, tuna casserole.

and most egregious of all

“The respondent husband repeatedly took charge of the remote television controller, endlessly flicking through channels and failing to stop at any channel requested by the petitioner,” one petition read.

Those examples and more here.  Gat get Markus Mobius.

Bicycle “sprints.”  This is worth 6 minutes of your time.

Thanks to Josh Knox for the link.

If you give them the chance, Northwestern PhD students will take a perfectly good game and turn it into a mad science experiment.  First there was auction scrabble, now from the mind of Scott Ogawa we have the pari-mutuel NCAA bracket pool.

Here’s how it worked.  Every game in the bracket was worth 1000 points. Those 1000 points were shared among all of the participants who picked the winner of that game.  These scores are added up for the entire bracket to determine the final standings.  The winner is the person with the most points and he takes all the money wagered.

Intrigued, I entered the pool and submitted a bracket which picked every single underdog in every single game.  Just to make a point.

Here’s the point.  No matter how you score your NCAA pool you are going to create a game with the following property:  assuming symmetric information and a large enough market, in equilibrium every possible bet will give exactly the same expected payoff.  In other words an absurd bet, like picking the underdog to win every single game, is going to do just as well as any other, less absurd, bet.

This is easy to see in a simple example, like a horse race where pari-mutuel betting is most commonly used.  Suppose A wins with twice the probability that B wins. This will attract bets on A until the number of bettors sharing in the purse when A wins is so large that B begins to be an attractive bet. In equilibrium there will be twice as much money in total bet on A as on B, equalizing the expected payoff from the two bets. One thing to keep in mind here is that the market must be large enough for these odds to equilibrate. (Without enough bettors the payoff on A may not be driven low enough to make B a viable bet.)
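The arithmetic of that example in a couple of lines (a sketch, with the track’s take ignored): a pari-mutuel dollar returns the whole pool divided by the money on the winner, so expected payoffs equalize exactly when bet shares match win probabilities.

```python
def expected_return_per_dollar(win_prob, share_of_pool_bet_on_horse):
    """Pari-mutuel: the whole pool is divided among the money bet on the winner."""
    return win_prob / share_of_pool_bet_on_horse

p_A, p_B = 2 / 3, 1 / 3                                  # A wins twice as often as B

# Out of equilibrium, equal money on each horse: A is the better bet, so money moves to A.
print(expected_return_per_dollar(p_A, 0.5), expected_return_per_dollar(p_B, 0.5))      # ~1.33 vs ~0.67

# Equilibrium: twice as much money on A as on B, and every bet returns $1 per $1 in expectation.
print(expected_return_per_dollar(p_A, 2 / 3), expected_return_per_dollar(p_B, 1 / 3))  # 1.0 and 1.0
```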

It’s a little more complicated though with a full 64 team tournament bracket. Because while each individual matchup has a pari-mutuel aspect, there is one key difference.  If you want to have a horse in the second-round race, you need to pick a winner in the first round.  So your incentive to pick a team in the first round must also take this into account.  And indeed, the bet share in a first round game will not exactly offset the odds of winning as it would in a standalone horse race.

On top of that, you aren’t necessarily trying to maximize the expected number of points.  You just want to have the most points, and that’s a completely different incentive.  Nevertheless the overall game has the equilibrium property mentioned above.

(Now keep in mind the assumptions of symmetric information and a large market.  These are both likely to be violated in your office pool.  But in Scott’s particular version of the game this only works in favor of betting longshots. First of all the people who enter basketball pools generally believe they have better information than they actually have so favorites are likely to be over-subscribed. Second, the scoring system heavily favors being the only one to pick the winner of a match which is possible in a small market.)

In fact, my bracket, 100% underdogs, Lehigh going all the way, finished just below the median in the pool.  (Admittedly the market wasn’t nearly large enough for me to have been able to count on this.  I benefited from an upset-laden first round.)

Proving that the equilibrium of an NCAA bracket pool has this property is a great prelim question.

Twitter has finally acknowledged a long-suspected bug that makes users automatically unfollow accounts for no apparent reason, and now that it’s working on a fix, many would rather keep the bug to cover the awkwardness of manually unfollowing people. Time to admit you’re just sick of your friends’ updates, folks.

Of course, Twitter power users like Reuters’ Anthony De Rosa don’t really want to automatically lose followers, but it’s sort of funny for him to tweet “one benefit of the unfollow bug is it gives me an excuse if someone gets upset i unfollowed them.” De Rosa’s far from the only one. It seems like hundreds reacted with the same sentiment on hearing the news. That’s because it’s true that sometimes you keep following some idiot just because you don’t want the drama of dropping them. Look at how many people publicly complain about losing a follower. Well, tweeters, it’s time for us to take responsibility for our actions just a little bit more. Take a cue from The Awl’s Choire Sicha and embrace the hate.

The link came from Courtney Conklin Knapp, who I believe still follows me but I can’t be sure.

On E-book collusion:

Once Apple made it known it would accept agency pricing (but not selling books at a higher price than other retail competitors), the publishing companies didn’t have to act in concert, although one of them had to be willing to bell the very large cat called Amazon by moving to the agency model.

I’ve long had a personal hypothesis — not based on any inside information, but simply my own read on the matter, I should be clear — that the reason it was Macmillan that challenged Amazon on agency pricing was that Macmillan is a privately held company, and thus immune from being punished short-term in the stock market for the action. Once it got Amazon to accept agency pricing, the other publishers logically switched over as well. This doesn’t need active collusion; it does need people paying attention to how the business dominoes could potentially fall.

Again, maybe they all did actively collude, in which case, whoops, guys. Stop being idiots. But if they did not, I suppose the question is: At what point does everyone knowing everyone else’s business, having a good idea how everyone else will act, and then acting on that knowledge, begin to look like collusion (or to the Justice Department’s point, actively become collusion)? My answer: Hell if I know, I’m not a lawyer. I do know most of these publishers have a lot of lawyers, however (as does Apple), and I would imagine they have some opinions on this.

John Scalzi is an author, blogger, and apparently a pretty good economist too.  Read the whole thing.

Observers cite the possibility of a brokered convention as the only reason for Newt Gingrich to remain in the race for the Republican nomination. If Mitt Romney cannot accumulate a majority of committed delegates prior to the convention, then Newt’s delegates give him bargaining power, with the possibility of throwing them behind Rick Santorum or even forging a Santorum/Gingrich ticket.

But why wait for the convention? If Gingrich and Santorum can strike a deal why not do it right now? There are tradeoffs.

  1. If all primaries awarded delegates in proportion to vote shares there would be no gain to joining forces early. Sending Newt’s share of the primary voters over to Rick gives him the same number of delegates as he would get if Newt collected those delegates himself and then bartered them at the convention. But winner-take-all primaries change the calculation. If Santorum and Gingrich split the conservative vote in a winner-take-all primary, all of those delegates go to Romney. Joining forces now gives the pair a chance of bagging those big delegate payoffs. (A small worked example follows point 3 below.)

2. Teaming up now solves a commitment problem.  If both stay in the race and succeed in bringing about a contested convention, the bargaining will be a three-sided affair with Romney potentially co-opting one of them and leaving the other in the cold.

Those are the incentives in favor of a merger now.  Working against a merger is

  3. A candidate has less control over his voters than he would have over his delegates. Newt endorsing Santorum does not guarantee that all of Newt’s supporters will vote for Rick; many will prefer Romney and others will just stay home on primary day.
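Here is the small worked example promised in point 1, with invented vote shares and the (per point 3, questionable) assumption that all of Newt’s voters transfer to Rick: under proportional rules the merger moves nothing, while under winner-take-all it flips the whole state.

```python
def delegates(vote_shares, total_delegates, winner_take_all):
    """Allocate one state's delegates from candidate -> vote share (illustrative rules only)."""
    if winner_take_all:
        winner = max(vote_shares, key=vote_shares.get)
        return {name: total_delegates * (name == winner) for name in vote_shares}
    return {name: round(share * total_delegates, 1) for name, share in vote_shares.items()}

split  = {"Romney": 0.45, "Santorum": 0.30, "Gingrich": 0.25}   # conservatives split the vote
merged = {"Romney": 0.45, "Santorum": 0.55}                     # Newt's voters all move to Rick

for wta in (False, True):
    rule = "winner-take-all" if wta else "proportional  "
    print(rule, "split :", delegates(split, 50, wta))
    print(rule, "merged:", delegates(merged, 50, wta))
```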

Gingrich and Santorum are savvy enough, and there is enough at stake, for us to assume they have done the calculations. Given the widespread belief that any vote for Rick or Newt is really an anti-Romney vote, they surely have discussed joining forces. But they haven’t done it yet and probably will not, and this tells us something.

The huge gain coming from points 1 and 2 can only be offset by losses coming from point 3. Their inability to strike a deal reveals that the Gingrich and Santorum staffs must have calculated that the anti-Romney theory is an illusion. They must have figured out that if Gingrich drops out of the race what will actually happen is that Romney will attract enough of Gingrich’s supporters (or enough of them will disengage altogether) to earn a majority and head into the convention the presumptive nominee.

Newt and Rick need each other. But what they particularly need is for each to stay in the race until the end, collecting not just the conservative votes but also the anti-other-conservative-candidate vote in hopes that their combined delegate total is large enough come convention-time to finally make a deal.

(Based on a conversation with Nageeb Ali)

When you are selecting seats on a flight and you have an open row should you take the middle seat or the aisle?  Even if you prefer the aisle seat you are tempted to take the middle seat as a strategic move.  People who check in after you will try to find a seat with nobody next to them and if you take the middle seat they will choose a different row.  The risk however is that if the flight is full you are still going to have someone sitting next to you and you will be stuck in the middle seat.

Let’s analyze a simple case to see the tradeoffs.  Suppose that when you are checking in there are two empty rows and the rest of the plane is full.  Let’s see what happens when you take the middle seat.  The next guy who comes is going to pick a seat in the other row.  Your worst fear is that he takes the middle seat just like you did.  Then the next guy who comes along is going to sit next to one of you and the odds are 50-50 it’s going to be you.  Had you chosen the aisle seat the next guy would take the window seat in your row.

If instead the guy right after you takes a window seat in the other row then your strategy just might pay off.  Because the third guy will also go to the other row, in the aisle seat.  If nobody else checks in you have won the jackpot.  A whole row to yourself.

But this is pretty much the only case in which middle outperforms aisle.  And even in this case the advantage is not so large.  In the same scenario, had you taken the aisle seat, the third guy would be indifferent between the two rows and you’d still have a 50-50 chance of a row to yourself.  Even when he takes your row he’s going to take the window seat and you would still have an empty seat next to you.

Worse, as long as one more person comes you are going to regret taking the middle seat.  Because the other row has only a middle seat left.  The fourth guy to come is going to prefer the window or aisle seat in your row.  Had you been sitting in the aisle seat the first four passengers would go aisle, aisle, window, window and you would be safe.
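A rough simulation of the two-empty-row scenario under stated, non-strategic assumptions: later passengers avoid sitting next to anyone when they can, otherwise prefer aisle over window over middle, and break ties at random (so this leaves out the “worst fear” that the next passenger also grabs a middle seat to deter others). It tabulates how often you end up with the row to yourself and how often you end up with no immediate neighbor, for a middle versus an aisle first move.

```python
import random

WINDOW, MIDDLE, AISLE = 0, 1, 2
SEAT_RANK = {AISLE: 0, WINDOW: 1, MIDDLE: 2}        # lower = more preferred seat type

def neighbors(seat):
    row, pos = seat
    return [(row, p) for p in (pos - 1, pos + 1) if 0 <= p <= 2]

def choose(empty, occupied, rng):
    """A naive passenger: avoid sitting next to anyone if possible, then go by seat type."""
    quiet = [s for s in empty if not any(n in occupied for n in neighbors(s))]
    pool = quiet if quiet else list(empty)
    best = min(SEAT_RANK[s[1]] for s in pool)
    return rng.choice([s for s in pool if SEAT_RANK[s[1]] == best])

def simulate(my_seat, extra_passengers, trials=20_000, seed=0):
    rng = random.Random(seed)
    row_to_self = no_neighbor = 0
    for _ in range(trials):
        occupied = {(0, my_seat)}
        empty = {(r, p) for r in (0, 1) for p in range(3)} - occupied
        for _ in range(extra_passengers):
            seat = choose(empty, occupied, rng)
            empty.remove(seat)
            occupied.add(seat)
        row_to_self += all((0, p) not in occupied for p in range(3) if p != my_seat)
        no_neighbor += not any(n in occupied for n in neighbors((0, my_seat)))
    return row_to_self / trials, no_neighbor / trials

for k in range(6):
    print(f"{k} later arrivals: middle {simulate(MIDDLE, k)}  aisle {simulate(AISLE, k)}")
```

Under these assumptions the middle seat’s only edge is the whole-row jackpot when exactly two more passengers show up; once more people arrive the aisle does at least as well on both counts, in line with the argument above.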

A question raised over dinner last week. A group of N diners are dining out and the bill is $100. In scenario A, they are splitting the check N ways, with each paying by credit card and separately entering a gratuity for their share of the check. In scenario B, one of them is paying the whole check.

In which case do you think the total gratuity will be larger?  Some thoughts:

  1. Because of selection bias, it’s not enough to cite folk wisdom that tables who split the check tip less (as a percentage):  At tables where one person pays the whole check that person is probably the one with the deepest pockets.  So field data would be comparing the max versus the average.  The right thought experiment is to randomly assign the check.
  2. Scenario B can actually be divided into two subcases.  In Scenario B1, you have a single diner who pays the check (and decides the tip) but collects cash from everyone else.  In Scenario B2 the server divides the bill into N separate checks and hands them to each diner separately.  We can dispense with B1 because the guy paying the bill internalizes only 1/Nth of the cost of the tip so he will clearly tip more than he would in Scenario A.  So we are really interested in B2.
  3. One force favoring larger tips in B2 is the shame of being the lowest tipper at the table.  In both A and B2 a tipper is worried about shame in the eyes of the server but in B2 there are two additional sources.  First, beyond being a low tipper relative to the overall population, having the server know that you are the lowest tipper among your peers is even more shameful.  But even more important is shame in the eyes of your friends.  You are going to have to face them tomorrow and the next day.
  4. On the other hand, B2 introduces a free-rider effect which has an ambiguous impact on the total tip.  The misers are likely to be even more miserly (and feel even less guilty about it) when they know that others are tipping generously.  On the other hand, as long as it is known that there are misers at the table, the generous tippers will react to this by being even more generous to compensate.  The total effect is an increase in the empirical variance of tips, with ambiguous implications for the total.
  5. However I think the most important effect is a scale effect.  People measure how generous they are by the percentage tip they typically leave.  But the cost of being a generous tipper is the absolute level of the tip not the percentage.  When the bill is large it’s more costly to leave a generous tip in terms of percentage.  So the optimal way to maintain your self-image is to tip a large percentage when the bill is small and a smaller percentage when the bill is large.  This means that tips will be larger in scenario B2.
  6. One thing I haven’t sorted out is what to infer from common restaurant policy of adding a gratuity for large parties.  On the one hand you could say that it is evidence of the scale effect in 5.  The restaurant knows that a large party means a large check and hence lower tip percentage.  However it could also be that the restaurant knows that large parties are more likely to be splitting the check and then the policy would reveal that the restaurant believes that B2 has lower tips.  Does anybody know if restaurants continue to add a default gratuity when the large party asks to have the check split?
  7. The right dataset to test this is the following.  You want to track customers who sometimes eat alone and sometimes eat with larger groups.  You want to compare the tip they leave when they eat alone to the tip they leave when part of a group.  The hypothesis implied by 3 and 5 is that their tips will be increasing, in order, across these three cases:  they are paying for the whole group, they are eating alone, they are splitting the check.

(Thanks to those who commented on G+)

Here’s a card game: You lay out the A,2,3 of Spades, Diamonds, Clubs in random order on the table face up. So that’s 9 cards in total. There are two players and they take turns picking up cards from the table, one at a time. The winner is the first to collect a triplet where a triplet is any one of the following sets of three:

  1. Three cards of the same suit
  2. Three cards of the same value
  3. Ace of Spades, 2 of Diamonds, 3 of Clubs
  4. Ace of Clubs, 2 of Diamonds, 3 of Spades

Got it?  Ok, this game can be solved and the solution is that with best play the result is a draw, neither player can collect a triplet.  See if you can figure out why. (Drew Fudenberg got it almost immediately [spoiler.]) Answer and more discussion are after the jump.
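If you would rather let a computer spoil it for you, a short memoized minimax over the nine cards confirms the claim: with the eight winning triplets implied by the list above, best play by both sides ends in a draw.

```python
from functools import lru_cache
from itertools import product

SUITS, VALUES = "SDC", "A23"                                    # Spades, Diamonds, Clubs x A, 2, 3
CARDS = tuple(v + s for v, s in product(VALUES, SUITS))         # e.g. 'AS', '2D', '3C'

TRIPLETS = (
    [frozenset(v + s for v in VALUES) for s in SUITS] +         # three cards of the same suit
    [frozenset(v + s for s in SUITS) for v in VALUES] +         # three cards of the same value
    [frozenset({"AS", "2D", "3C"}), frozenset({"AC", "2D", "3S"})]   # the two mixed triplets
)

def has_triplet(hand):
    return any(t <= hand for t in TRIPLETS)

@lru_cache(maxsize=None)
def value(my_hand, opp_hand):
    """Value of the game to the player about to move: +1 win, 0 draw, -1 loss."""
    remaining = [c for c in CARDS if c not in my_hand and c not in opp_hand]
    if not remaining:
        return 0
    best = -1
    for card in remaining:
        new_hand = my_hand | {card}
        if has_triplet(new_hand):
            return 1                    # picking this card completes a triplet and wins
        best = max(best, -value(opp_hand, frozenset(new_hand)))
    return best

print(value(frozenset(), frozenset()))  # 0: with best play neither player collects a triplet
```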


Models of costly voting give rise to strategic turnout:  in a district in which party A has a big advantage, supporters of party A will have low turnout in equilibrium, with the result that the election is close.  That’s because only when the election is close do voters have an incentive to turn out and vote, which is costly.

Looking at elections data it is hard to identify strategic turnout. Low turnout is perfectly consistent with non-strategic voters who just have high costs of voting.

Redistricting offers an interesting source of variation that could help. Suppose that a state has just undergone redistricting and a town has been moved from a district with a large majority for one party into a more competitive district. Non-strategic voters in that town will not change their behavior.

But strategic voters will have different incentives in the new district. In particular we should see an increase in turnout among voters in the town that is new to the district. And this increase in turnout should be larger than any change in turnout observed for voters who remained in the district before and after redistricting.

There are probably a slew of testable implications that could be derived from models of strategic turnout based on whether the new district is more or less competitive than the old one, whether the stronger party is the same or different from the stronger party in the old district, and whether the town leans toward or against the stronger party in the new district.

Consider a Man and a Woman. Time flows continuously and the horizon is infinite. At time T=0 they are locked in an embrace, and every instant of time t>0 their lips draw closer. Let \delta_t be the distance at time t; it declines monotonically over time.  At each t, the two simultaneously choose actions a^i_t which jointly determine the speed at which they close the space that separates them, governed by the rule

\frac{d \delta_t}{d t} = - f(a^M_t, a^W_t)

where f is strictly increasing in both arguments. In addition, both the Man and the Woman can pull away at any moment by choosing action a_0, thereby spurning the kiss and ending the game.

The closer they get the clearer they can see into one another’s eyes, revealing to each of them the true depth of their love, captured by the state of the world \theta, about which they receive private, and increasingly precise, signals as the game unfolds.

In this game, the lovers have common interests. Each wants to kiss if and only if their love is true, i.e. \theta >0.  However, they know the risks of opening their heart to another:  neither wants to be the one left unrequited. When \theta > 0, each prefers kissing to breaking the embrace, but each prefers to pull away first if they expect the other to pull away.

Along the equilibrium path their lips move fleetingly close. At close proximity every tiny fluctuation in the speed of approach communicates to the other changes in the private estimates \hat \theta_i each lover i is updating continuously over time, i.e.  a^i_t varies monotonically with the estimate \hat \theta_i.

But then: does he see doubt in her eyes? Did she blink? He cannot be sure. A bad signal, a discrete drop in his estimate and this causes him to hesitate.  And since \theta is a common state of the world, his hesitation is informative for her and so she pauses too. Not just because his hesitation raises doubts that their love is everlasting, but worse:  he may be preparing to turn away.  She must prepare herself too.

But she doesn’t. She sees deeper than that and instead she lurches ahead ever so slightly. He is looking into her eyes:  he can see that she believes with all her heart that \theta is positive. And now he knows that these are her true beliefs because if in truth her estimate of \theta was close to the negative region, his hesitation would have pushed her over and she would have turned away pre-emptively. Instead her persistence implores him to have faith in their love and to stay there in her arms with his lips so tantalizingly close to hers.

His doubts are vanquished. He loves her. She knows that he knows that she loves him too. And at last it is common knowledge that their love is true and they will kiss and in their moment of deepest passion they discover something about their payoff functions they haven’t before. This moment is the first moment in the rest of their lives together. They will not rush. Time is standing still now. Together, as if coordinated by the eternal spirit of amor, they allow a_t^i to fall gradually to zero, just slow enough that their lips finally meet, but just fast enough that, when they do,

\frac{d \delta_t}{d t} \rightarrow 0

so that their convergence occurs smoothly but still in finite time.

Happy Anniversary Jennie

(drawing:  Chemistry from www.f1me.net)

Great prelim question:

 To provide some interpretation, consider a set of equidistant urinals in a washroom and men who enter the room sequentially. Men dislike to choose a urinal next to another urinal which is already in use. If no urinal providing at least basic privacy is available, each man prefers to leave the room immediately. Each man prefers larger distances to the next man compared to smaller distances. The men enter the bathroom one by one in rapid succession, so men will only consider the privacy they have after no further men decides to use a urinal (e.g., the privacy the first man enjoys before the second man enters is too short to influence the first man’s utility).

One of the paper’s main results is that maximizing throughput (Beavis!) of a washroom may, paradoxically, entail restricting total capacity.  Consider a wall lined with 5 urinals.  The subgame perfect equilibrium has the first gentleman take urinal 2 and the second caballero take urinal 5.  These strategies are pre-emptive moves that induce subsequent monsieurs to opt for a stall instead out of privacy concerns.  Thus urinals 1, 3, and 4 go unused.  If instead urinals 2 and 4 are replaced with decorative foliage, and assuming that gentleman #1 is above relieving himself into same, then the new subgame perfect equilibrium has him taking urinal 1, and urinals 3 and 5 hosting the subsequently arriving blokes.  See the example on page 11.
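A rough way to check the comparative statics (a simplified stand-in, not the paper’s actual model): assume each man only takes a urinal with no occupied neighbor, and among those picks the one maximizing his eventual distance to the nearest other user, with perfect foresight about later arrivals and ties broken toward the lowest number. The solver below finds two users with the full bank of five and three users once urinals 2 and 4 become foliage; the exact positions it picks depend on the tie-breaking (the {2, 5} outcome described above is another equilibrium), but the throughput comparison is the same.

```python
from functools import lru_cache

def solve(urinals):
    """Equilibrium outcome of the sequential urinal game on a given set of positions.

    Each arriving man may only take a urinal at distance >= 2 from every occupied
    one (otherwise he leaves and the process ends), and among admissible urinals he
    picks the one maximizing his distance to the nearest other user in the final
    configuration, anticipating later arrivals.  Ties go to the lowest number."""
    urinals = tuple(sorted(urinals))

    @lru_cache(maxsize=None)
    def final(occupied):
        admissible = [u for u in urinals
                      if u not in occupied and all(abs(u - o) >= 2 for o in occupied)]
        if not admissible:
            return occupied

        def my_final_privacy(u):
            config = final(tuple(sorted(occupied + (u,))))
            others = [o for o in config if o != u]
            return min(abs(u - o) for o in others) if others else float("inf")

        best = max(admissible, key=lambda u: (my_final_privacy(u), -u))
        return final(tuple(sorted(occupied + (best,))))

    return final(())

print(solve([1, 2, 3, 4, 5]))     # two users, e.g. (1, 4) under this tie-breaking
print(solve([1, 3, 5]))           # urinals 2 and 4 replaced by foliage: all three get used
```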

Free cowboy hat tip:  Josh Gans

Let’s join Harvard Sports Analysis for the post-mortem:

But no one knew that his score would decide the game. Before he ran the ball in, the Giants had 0.94 win probability (per Advanced NFL Stats). After the play, the Giants’ win probability dropped to 0.85. Had he instead taken a Brian Westbrook or Maurice Jones-Drew-esque knee on the goal line, the Giants would have had a 0.96 win probability. Assuming the Patriots used their final time out, the Giants would have had 3rd and Goal from the 1-yard line with around 1:04 left to play. At this point, the Giants could either attempt to score a touchdown or take a knee. Assuming the touchdown try was unsuccessful or that Eli Manning kneeled, the Giants could have let the clock run all the way down to 0:25 before using the Giants’ final time out. With 4th and Goal from the 2 with 25 seconds left to play, the Giants would have a 0.92 win probability, 0.07 higher than after Bradshaw scored the touchdown of his life.

I am not sure about all this though.  Shouldn’t Bradshaw have just stood there on the 1 (far away enough that he can’t be pushed in) and then crossed over at the last second?

Candidates S and R are competing in the opposing party’s primary, and your candidate awaits the winner in the general election. Your candidate beats S in the general election with probability s and beats R in the general election with probability r<s.  You would like S to win the primary since s>r. But S is currently the underdog, he beats R in the primary with only probability p. Should you spend money to help S?

Every percentage point you can add to S’s chance of winning the primary increases your candidate’s odds in the general election by s-r < 1.

(You win the general election with total probability ps + (1-p)r. An increase in p by one unit increases this probability by s-r.)

If you save your money for the general election, every percentage point you add to your own chance of winning raises your own chance of winning by, well, 1 percentage point.

  1. The same analysis goes through with any number of candidates in the primary.  So you can add G and P and it won’t change anything.
  2. This is about the marginal value of influencing a 1 percent change in the election probabilities.  That value is larger in the general election.  But there may be differences in the marginal cost of influencing a primary versus the general.
  3. In particular, if the primary is a three-candidate race there may be a lumpy return on your investment.  For example, if you increase p by a little bit that could cause G to drop out of the race, and you can then hope that a big chunk of his probability of winning goes into increasing p further.
  4. However, with G currently at about 3% probability at Intrade, at most you can get 3 times s-r.  For this to outweigh the 1 you get from the general it must be that s-r > 33%
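A quick numeric check of the comparison, with made-up probabilities:

```python
def win_general(p, s, r):
    """p: prob. S wins the primary; s, r: prob. your candidate beats S or R in the general."""
    return p * s + (1 - p) * r

p, s, r = 0.30, 0.60, 0.45          # invented numbers with s > r
point = 0.01                        # one percentage point

gain_from_primary = win_general(p + point, s, r) - win_general(p, s, r)
gain_from_general = point           # a point added directly to your own chance of winning
print(round(gain_from_primary, 4), gain_from_general)   # 0.0015 vs 0.01: s - r of a point vs a full point
```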

Start with a world without rhetorical questions. All questions are interpreted as being genuinely inquisitive. You are considering doing X and someone comes up to you and says “Why on Earth would you want to do X?”

Now two things happen. First, since all questions are genuinely inquisitive, you take his question literally and you start thinking of an answer. Why indeed do you want to do X?  The second thing that happens is that, again because the question is genuine, you learn that it’s not obvious to your inquisitor that X is the right thing to do.

That’s compelling information if you happened to be wondering whether X is in fact the right thing to do.  And no matter how successful you are at coming up with an answer as to why on Earth you want to do X, that information will make you at least slightly less sanguine about doing X.

And that is why you can’t have a world without rhetorical questions.  Because in a world without rhetorical questions, questions are effective rhetoric.  Indeed a world without rhetorical questions maximizes the rhetorical value of a question.

In a world where all questions are interpreted as genuine queries, someone who is not genuinely inquisitive but in fact has an agenda most effectively erodes your confidence in X by saying “Why on Earth would you want to do X?”  And so questions spontaneously become rhetorical devices.

As these devices are used more and more, the questions are taken less and less literally.  In equilibrium the incentive to convert rhetorical arguments into questions continues right up until the point where questions have no rhetorical value over and above just saying outright “X sucks.”

That doesn’t mean that rhetorical questions die away.  They must continue to be used just frequently enough so that their value is just degraded enough so that nobody has any (strict) incentive to use them any more than that.

(Drawing:  Something About Relationships from www.f1me.net)

Jonah Lehrer didn’t:

In many situations, such reinforcement learning is an essential strategy, allowing people to optimize behavior to fit a constantly changing situation. However, the Israeli scientists discovered that it was a terrible approach in basketball, as learning and performance are “anticorrelated.” In other words, players who have just made a three-point shot are much more likely to take another one, but much less likely to make it:

What is the effect of the change in behaviour on players’ performance? Intuitively, increasing the frequency of attempting a 3pt after made 3pts and decreasing it after missed 3pts makes sense if a made/missed 3pts predicted a higher/lower 3pt percentage on the next 3pt attempt. Surprizingly [sic], our data show that the opposite is true. The 3pt percentage immediately after a made 3pt was 6% lower than after a missed 3pt. Moreover, the difference between 3pt percentages following a streak of made 3pts and a streak of missed 3pts increased with the length of the streak. These results indicate that the outcomes of consecutive 3pts are anticorrelated.

This anticorrelation works in both directions, as players who missed a previous three-pointer were more likely to score on their next attempt. A brick was a blessing in disguise.

The underlying study, showing a “failure of reinforcement learning,” is here.

Suppose you just hit a 3-pointer and now you are holding the ball on the next possession. You are an experienced player (they used NBA data), so you know if you are truly on a hot streak or if that last make was just a fluke. The defense doesn’t. What the defense does know is that you just made that last 3-pointer and therefore you are more likely to be on a hot streak and hence more likely than average to make the next 3-pointer if you take it. Likewise, if you had just missed the last one, you are less likely to be on a hot streak, but again only you would know for sure. Even when you are feeling it you might still miss a few.

That means that the defense guards against the three-pointer more when you just made one than when you didn’t. Now, back to you. You are only going to shoot the three pointer again if you are really feeling it. That’s correlated with the success of your last shot, but not perfectly. Thus, the data will show the autocorrelation in your 3-point shooting.

Furthermore, when the defense is defending the three-pointer you are less likely to make it, other things equal. Since the defense is correlated with your last shot, your likelihood of making the 3-pointer is also correlated with your last shot. But inversely this time:  if you made the last shot the defense is more aggressive so conditional on truly being on a hot streak and therefore taking the next shot, you are less likely to make it.

(Let me make the comparison perfectly clear:  you take the next shot if you know you are hot, but the defense defends it only if you made the last shot.  So conditional on taking the next shot you are more likely to make it when the defense is not guarding against it, i.e. when you missed the last one.)

You shoot more often and miss more often conditional on a previous make. Your private information about your make probability coupled with the strategic behavior of the defense removes the paradox. It’s not possible to “arbitrage” away this wedge because whether or not you are “feeling it” is exogenous.
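A toy simulation of that story, with all parameters invented: a persistent, privately observed “feeling it” state determines whether the player lets the next three fly, while the defense keys only on the observable outcome of his last attempt. The measured three-point percentage after a make comes out lower than after a miss, purely through the defensive channel.

```python
import random

# Invented parameters
P_STAY_HOT, P_GET_HOT = 0.7, 0.2        # persistence of the hidden "feeling it" state
MAKE_OPEN, MAKE_DEFENDED = 0.45, 0.33   # a hot shooter's 3pt% without / with extra defensive attention

def simulate(possessions=200_000, seed=0):
    rng = random.Random(seed)
    hot, last_outcome = False, None          # last_outcome: did the previous attempt go in?
    after = {True: [0, 0], False: [0, 0]}    # previous make/miss -> [attempts, makes]
    for _ in range(possessions):
        hot = rng.random() < (P_STAY_HOT if hot else P_GET_HOT)
        if not hot:
            continue                         # the player only lets it fly when he is feeling it
        defended = (last_outcome is True)    # the defense keys on the last observable outcome
        made = rng.random() < (MAKE_DEFENDED if defended else MAKE_OPEN)
        if last_outcome is not None:
            after[last_outcome][0] += 1
            after[last_outcome][1] += made
        last_outcome = made
    return {k: v[1] / v[0] for k, v in after.items()}

rates = simulate()
print("3pt% after a made three  :", round(rates[True], 3))    # ~ MAKE_DEFENDED
print("3pt% after a missed three:", round(rates[False], 3))   # ~ MAKE_OPEN
```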

Defamation is the making of a false statement that creates a negative image of another person.  At a superficial level the point of anti-defamation laws is to prevent such false statements.  But false statements by themselves are not damaging unless they do harm to the subject’s reputation.  For that, the statement must be credible.

If the direct effect of an anti-defamation law is to reduce the number of false statements made, an indirect effect is to enhance the credibility of all of the false statements that continue to be made.  Because a member of the public who cannot assess the veracity of a given statement will begin with the presumption that the statement is more likely to be true since a larger fraction of all statements made are true.  This of course encourages more false statements, undermining the original direct effect of the law.

Indeed it is impossible to eliminate false damaging statements without making them even more damaging.

Nevertheless, in equilibrium the net effect of an anti-defamation law is to increase the truthfulness of public discourse.  The marginal slanderous statement is the one which is just damaging enough to compensate for the expected cost of a lawsuit.  When that cost is higher, the previously marginal statement is crowded out.

But that just says that the proportion of statements that are false goes down.  Another effect of anti-defamation laws is to reduce the number of truthful statements.  Even a truthful statement has a chance of being judged false and damaging.  There will overall be fewer things said.

Furthermore, since a defamatory statement must be proven to be false and some falsehoods are easier to demonstrate than others, the incidence of anti-defamation laws on various types of lies must be considered.   A libelous claim will be made if and only if the cost of the potential lawsuit is outweighed by the value of making it.  For statements whose explicit intention is to defame, that value increases as the overall credibility of public discourse increases.  Among those statements, the ones that are hardest to prove false will actually be said more and more often.

In fact as long as the speaker is creative enough to think of a variety of different ways to defame, the main effect of anti-defamation laws will be to substitute away from verifiable lies in favor of statements which are more difficult to prove false.  This will be so as long as a sufficiently large segment of the public cannot tell the difference between statements that can be verified and statements that cannot.

I write all the time about strategic behavior in athletic competitions.  A racer who is behind can be expected to ease off and conserve on effort since effort is less likely to pay off at the margin.  Hence so will the racer who is ahead, etc.  There is evidence that professional golfers exhibit such strategic behavior; this is the Tiger Woods effect.

We may wonder whether other animals are as strategically sophisticated as we are.  There have been experiments in which monkeys play simple games of strategy against one another, but since we are not even sure humans can figure those out, that doesn’t seem to be the best place to start looking.

I would like to compare how humans and other animals behave in a pure physical contest like a race.  Suppose the animals are conditioned to believe that they will get a reward if and only if they win a race.  Will they run at maximum speed throughout regardless of their position along the way?  Of course “maximum speed” is hard to define, but a simple test is whether the animal’s speed at a given point in the race is independent of whether they are ahead or behind and by how much.

And if the animals learn that one of them is especially fast, do they ease off when racing against her?  Do the animals exhibit a Tiger Woods effect?

There are of course horse-racing data.  That’s not ideal because the jockey is human.  Still there’s something we can learn from horse racing.  The jockey does not internalize 100% of the cost of the horse’s effort.  Thus there should be less strategic behavior in horse racing than in races between humans or between jockey-less animals.  Dog racing?  Does that actually exist?

And what if a dog races against a human, what happens then?

Alex Madrigal endorses this game:

“Don’t Be A Di*k During Meals With Friends.”

The first person to crack and look at their phone picks up the check.

Our (initial) purpose of the game was to get everyone off the phones free from twitter/fb/texting and to encourage conversations.

Rules:

1) The game starts after everyone has ordered.

2) Everybody places their phone on the table face down.

3) The first person to flip over their phone loses the game.

4) Loser of the game pays for the bill.

5) If the bill comes before anyone has flipped over their phone everybody is declared a winner and pays for their own meal.

The problem with this implementation is that once one person cracks, she’s paying for the meal regardless of what happens next and so all the incentive power is gone.  It’ll be a twitter/fb/texting free-for-all.

A more sophisticated approach is to make the last person who uses their phone pay for the meal.  It’s subtle:  no matter how many people have used their phone already, everybody else has maximal incentives not to be the next one.  Because by backward induction they will be the last one and instead of a free meal they will pay for everyone.

But even that has its problems because the first guy has no incentives left.  So you could do something like this:  At the beginning everyone is paying their own meal.  The first one to use their phone has to pay also for the meal of one other person.  The next person who uses their phone, including possibly if the first guy does it again, has to pay for all the meals that the previous guy had to pay for plus one more.

It would take a lot of mistakes to run out of incentives with that scheme but even if you do you can start paying for the people at the table next to you.

Barretina bump:  Courtney Conklin Knapp

Someone you know is making a scene on a plane. They don’t see you. Yet.  As of now they think they are making a scene only in front of total strangers who they will never see again.  It might be awkward if they knew you were a witness.  Should you avert your eyes in hopes they won’t see you seeing them?

If they are really making a scene it is highly unlikely that you didn’t notice. So if eventually he does see you and sees that you are looking the other way he is still going to know that you saw him. So in fact it’s not really possible to pretend.

Moreover if he sees that you were trying to pretend then he will infer that you think that he was behaving inappropriately and that is why you averted your eyes. Given that he’s going to know you saw him you’d rather him think that you think that he was in fact in the right.  Then there will be no awkwardness afterward.

However, there is the flip side to consider.  If you do make eye contact there will be higher order knowledge that you saw him. How he feels about that depends on whether he thinks his behavior is inappropriate.  If he does then he’s going to assume you do too.  Once you realize you can’t avoid leaving the impression that you knew he was behaving inappropriately, and the unavoidable mutual knowledge of that fact, the best you can do is avoid the higher-order knowledge by looking the other way.

So it all boils down to a simple rule of thumb: If you think that he knows he is behaving inappropriately then you should look away. You are going to create discomfort either way, but less if you minimize the higher-orders of knowledge. But if you think that he thinks that in fact he has good reason to be making a scene then, even if you know better and see that he is actually way out of line, you must make eye contact to avoid him inferring that you are being judgmental.

Unless you can’t fake it.  But whatever you do, don’t blog about it.

Suppose our minds have a hot state and a cool state.  In the cool state we are rational and make calculated tradeoffs between immediate rewards and payoffs that require investment of time and effort.  But when the hot state takes over we abandon deliberation and just react on instinct.

The hot state is there because there are circumstances where the stakes are too high and our calculations too slow or imperfect.  You are being attacked, the food in front of you smells funky, that bridge looks unstable.  No matter how confident your cool head might be, the hot state grabs the wheel and forces you to do the safe thing.

Suppose all of that is true.  What does that mean when a situation looks borderline and you see that instincts haven’t taken over?  Your cool, calculating head rationally infers that this must be a safer situation than it would otherwise appear.  And you are therefore inclined to take more risks.

But then the hot state better step in on those borderline situations to stop you from taking those excessive risks.  Except that now the borderline has moved a little bit toward the safe end.  Now when the hot state doesn’t take over it means it’s even safer, etc.

And of course there is the mirror image of this problem where the hot state takes over to make sure you take an urgent risk.  A potential mate is in front of you but the encounter has questionable implications for the future.  Physical attraction receives a multiplier.  If it is not overwhelming then all of the warning signs are magnified.