
Ariel Rubinstein wrote the Afterword for the 2007 reprinting of the book that launched Game Theory as a field, von Neumann and Morgenstern’s Theory of Games and Economic Behavior. Here is a representative excerpt:

Others (including myself) think that the object of game theory is primarily to study the considerations used in decision making in interactive situations.  It identifies patterns of reasoning and investigates their implications on decision making in strategic situations.  According to this opinion, game theory does not have normative implications and its empirical significance is very limited.  Game theory is viewed as a cousin of logic.  Logic does not allow us to screen out true statements from false ones and does not help us distinguish right from wrong.  Game theory does not tell us which action is preferable or predict what other people will do.  If game theory is nevertheless useful or practical, it is only indirectly so.  In any case, the burden of proof is on those who use game theory to make policy recommendations, not on those who doubt the practical value of game theory in the first place.

And, by the way, I sometimes wonder why people are so obsessed in looking for “usefulness” in economics generally and game theory in particular.  Should academic research be judged by its usefulness?

Tam o’Shanter Toss:  Russ Roberts

Poker players know that the eyes never lie.  Indeed your eyes almost always signal your intentions for the simple reason that you have to see what you intend to do.

This is an essential difference between communication with eye movement/eye contact and other forms of communication.  The connection between what you know and what you say is entirely your choice and of course you will always use this freedom to your advantage.  But what you are looking at and where your eyes move are inevitably linked.

Naturally your friends and enemies have learned, indeed evolved to exploit this connection.  Even the tiniest changes in your gaze are detectable.  As an example, think of the strange feeling of having a conversation with someone who has a lazy eye.

Given that Mother Nature reveals such a strong evolutionary advantage in reading another's gaze, the question arises: why have we not evolved to mask it from those who would take advantage?  The answer must be that masking would in fact not be to our advantage.

With any form of communication, sometimes you want to be truthful and other times you want to deceive.  The physical link between your attention and your gaze means that, for this particular form of communication, you can't have it both ways.  Outright deception being impossible, at best Nature could hide our gaze altogether, say by uniformly coloring the entire eye.

But she chose not to.  By Nature’s revealed preference, this particular form of honesty is evolutionarily advantageous, at least on average.

For the sake of argument let’s take on the plain utilitarian case for waterboarding: in return for the suffering inflicted upon a single terror suspect we may get information that can save many more people from far greater suffering. At first glance, authorizing waterboarding simply scales up the terms of that tradeoff. The suspect suffers more and therefore he will be inclined to give more information and sooner.

But these higher stakes are not appropriate for every suspect. After all, the utilitarian cost of torture comes in large part from the possibility that this suspect may in fact have no useful information to give; he may even be innocent. When presented with a suspect whose value as an informant is uncertain, these costs are too high to use the waterboard. Something milder, like sleep deprivation, is preferred instead.

So the utilitarian case for authorizing waterboarding rests on the presumption that it will be held in reserve for those high-value suspects where the trade-off is favorable.

But if we look a little closer we see it's not that simple. Torture relies on promises and not just threats. A suspect is willing to give information only if he believes that it will end or at least limit the suffering. When we authorize waterboarding, we undermine that promise because our sleep-deprived terror suspect knows that as soon as he confesses, thereby proving that he is in fact an informed terrorist, he changes the utilitarian tradeoff. Now he is exactly the kind of suspect that waterboarding is intended for. He's not going to confess because he knows that would make his suffering increase, not decrease.

This is an instance of what is known in the theory of dynamic mechanism design as the ratchet effect.

Taken to its logical conclusion this strategic twist means that the waterboard, once authorized, can’t ever just sit on the shelf waiting to be used on the big fish. It has to be used on every suspect. Because the only way to convince a suspect that resisting will lead to more suffering than the waterboarding he is sure to get once he concedes is to waterboard him from the very beginning.
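To make the logic concrete, here is a toy comparison in Python (every number is invented for illustration; nothing here comes from the paper):

    # Toy numbers: the suspect compares the suffering that follows a
    # confession with the suffering that follows another period of resistance.
    pain_mild = 1.0        # per-period suffering under sleep deprivation
    pain_waterboard = 5.0  # suffering once identified as a high-value suspect

    # Promise regime: confessing ends the interrogation.
    confess_under_promise = 0.0
    resist = pain_mild

    # Ratchet regime: confessing proves the suspect is an informed,
    # high-value target, exactly who the waterboard is reserved for.
    confess_under_ratchet = pain_waterboard

    print(confess_under_promise < resist)  # True: the suspect talks
    print(confess_under_ratchet < resist)  # False: he stays silent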

The formal analysis is in Sandeep’s and my paper, here.

The season finale of Bachelor Pad featured a surprise twist.  The share of the prize money would be decided by a “keep” or “share” Prisoners’ Dilemma-style game.  $250,000 was at stake and the last standing couple, Dave and Natalie, could split the money if they both chose Share.  If one of them chose “Keep” then he/she would take all the money for him/herself.  If they both chose “Keep” the $250,000 would be shared among all the contestants who were previously eliminated from the show.

What makes this game different from the Golden Balls game is that the decision to Share doubles as a signal of being a faithful partner in their post-show everlasting love.

The clip below is a bit long, but the highlight comes in the middle when the loser bachelor(ettes) give their game theoretic analyses while Dave and Natalie go into separate rooms to prove theorems.

Thanks to Charles Murry for the pointer.

I am partial to RJ’s Black Soft Licorice. In moderation, like cocaine (I imagine!), it does no harm. But like cocaine (I imagine!), one is tempted to consume it in excess, a bag at a time.

How can I just eat a reasonable amount?  I could join Licorice Lovers Anonymous (LLA) and complete a ten-step program to kick my habit entirely. I am sure there are many fellow sufferers out there, who love RJ’s not too little but too much.  Just a few tweets would allow us to coördinate and set up weekly meetings.  Seems like overkill.  And anyway, I don’t want to kick the habit entirely, just control it.

I see my five-year old wandering around, causing trouble, and a simple solution appears magically in my mind.  I ask him to hide the licorice.  He is very good at hiding things that do not belong to him – remote controls, his brother’s toys, my watch etc. etc. He’ll love to hide the licorice.  There are a couple of problems.  It is my intention to ask him to bring the licorice back every day so I can have a few pieces.  But there is a significant chance that he’ll forget where he hid the bag.  So what?  Then, we’ll lose the bag and the licorice. But this is not any worse than the LLA solution of cutting out the addiction completely.

There is a second and quite famous problem from incentive theory: Who will monitor the monitor?  In other words, perhaps your police-kid will eat the licorice himself.  For this problem I have an answer.  My five-year old will consume strawberry Twizzlers by the cartful, but black, spicy licorice, I think not.  I am proved right.  One piece of licorice is chewed but the rest are intact.

Thinking about it, I realize that I have used a variation of an old idea of Oliver Williamson’s, “Credible Commitments: Using Hostages to Support Exchange” (jstor gated version). In his analysis, a contracting party A voluntarily hands over an “ugly princess” to party B to give party A the incentive to perform some costly investment.  Party B does not value the princess and hands her over once the investment is sunk.  In my argument, the ugly princess is the licorice and instead of specific investment, I want to commit to avoid over-consumption of an addictive good.

This pretty much gives you the principles under which this mechanism works: Consumption of a good that is addictive for party A but has no value for party B can be controlled by allowing party B to control the use of the good.  Party A might return the favor for party B (e.g. by rationing computer game time).  Only, my party B would never agree to this voluntarily and would see it as a violation of civil liberties rather than as a favor.  This level of addiction I have no solution for….

An Israeli leftist believes that right-wing Prime Minister Netanyahu can bring peace:

“The left wants to make peace but cannot, while the right doesn’t want to but, if forced to, can do it.”

Why can a right-winger make peace, while a left-winger cannot?  There might be many reasons but the one mentioned in this blog must of course draw from Crawford-Sobel’s Strategic Information Transmission which has become the canonical model of the game-theoretic notion Cheap Talk. The key intuition was identified by Cukierman and Tommasi in their AER paper “Why does it take a Nixon to go to China?” (working paper version).

Suppose an elected politician knows the true chances for peace but also has a bias against peace and for war.  Then the median voter hears his message and decides whether to re-elect him or appoint a challenger.  Given the politician’s bias, he may falsely claim there is a good case for war even if it is not true.  So it is hard for a politician biased towards war to credibly make the case for war; he risks losing the election.  But if he makes the case for peace, it is credible: why would a hawk prosecute the case for peace unless the case for peace is overwhelming?  So the more a politician proposes a policy that is against his natural bias, the higher is the chance he gets re-elected.  If the case for peace is strong, a war-biased politician can either propose war, in which case he may not get re-elected and the challenger gets to choose policy, or he can propose peace, get re-elected and implement the “right” policy.  In equilibrium, the latter dominates and Nixon is necessary to go to China, Mitterrand is necessary for privatization etc…

Is this why Netanyahu believes he can make peace?  Maybe he cares about leaving a legacy as a statesman.  This would make him a less credible messenger – via the logic above, he is biased towards peace and any dovish message he sends is unreliable.  Let’s hope that the stories of his strongly Zionist father and hawkish wife are all true.  And then Hamas should fail to derail the negotiations… And Hezbollah should fail in its efforts… And the other million stars that must align must magically find their place….

Recruit homeless people to run as candidates in an opposing party.  Steve May, a Republican Party operative in Arizona, is recruiting three way-way-outside-the-beltway candidates to run for the Green Party in a local election, expecting that Green candidates will siphon votes from Democratic candidates.

“Did I recruit candidates? Yes,” said Mr. May, who is himself a candidate for the State Legislature, on the Republican ticket. “Are they fake candidates? No way.”

To make his point, Mr. May went by Starbucks, the gathering spot of the Mill Rats, as the frequenters of Mill Avenue are known.

“Are you fake, Benjamin?” he yelled out to Mr. Pearcy, who cried out “No,” with an expletive attached.

“Are you fake, Thomas?” Mr. May shouted in the direction of Thomas Meadows, 27, a tarot card reader with less than a dollar to his name who is running for state treasurer. He similarly disagreed.

“Are you fake, Grandpa?” he said to Anthony Goshorn, 53, a candidate for the State Senate whose bushy white beard and paternal manner have earned him that nickname on the streets. “I’m real,” he replied.

Whether it is desirable to have your kid fall asleep in the car goes through cycles as they age. It’s lovely to have your infant fall asleep in the snap-out car carriers. Just move inside and the nap continues undisturbed. By the time they are toddlers and you are trying to keep a schedule, the car nap only messes things up. Eventually though, getting them to fall asleep in the car is a free lunch:  sleep they wouldn’t otherwise get, a moment of peace you wouldn’t otherwise get. Best of all at the end of a long day if you can carry them into bed you skip out on the usual nighttime madness.

Our kids are all at that age and so it’s a regular family joke in the car ride home that the first to fall asleep gets a prize. It sometimes even works. But I learned something on our vacation last month when we went on a couple of longer-than-usual car trips. Someone will fall asleep first, and once that happens the contest is over. The other two have no incentive. Also, in the first-to-fall game, each child has an incentive to keep the others awake. Not good for the parents. (And this second problem persists even if you try to remedy the first by adding runner-up prizes.)

So the new game in town is last-to-sleep gets a prize. You would think that this keeps them up too long but it actually has some nice properties. Optimal play in this game has each child pretending to sleep, thereby tricking the others into thinking they can fall asleep and be the last. So there’s lots of quiet even before they fall asleep. And there’s no better way to get a tired kid to fall asleep than to have him sit still, as if sleeping, in a quiet car.

Here is Sandeep’s post on the data discussed in the New York Times about winning percentages on first and second serves in tennis. There are a few players who win with higher frequency on either the first or second serve and this is a puzzle. Daniel Kahneman even gets drawn into it.  (To be precise, we are calculating the probability she wins on the first serve and comparing that to the probability she wins conditional on getting to her second serve.  At least that is the relevant comparison; this is not made clear in the article.  Also I agree with Sandeep that the opponent must be taken into consideration but there is a lot we can say about the individual decision problem. See also Eilon Solan.)

And the question persists: would players have a better chance of winning the point, even after factoring in the sure rise in double faults, by going for it again on the second serve — in essence, hitting two first serves?

But this is the wrong way of phrasing the question and in fact by theory alone, without any data (and definitely no psychology), we can prove that most players do not want to hit two first serves.

One thing is crystal clear: your second serve should be your very best.  To formalize this, let’s model the variety of serves in a given player’s arsenal.  For our purposes it is enough to describe a serve by two numbers.  Let x be the probability that it goes in and the point is lost and let y be the probability that it goes in and the point is won.  Then x + y \leq 1 and 1 - (x +y) is the probability of a fault (the serve goes out).  The arsenal of serves is just the set of pairs (x,y) that a server can muster.

Your second serve should be the one that has the highest y among all of those in your arsenal.  There should be no consideration of “playing it safe” or “staying in the point” or “not giving away free points” beyond the extent to which those factor into maximizing y, the probability of the serve going in and winning.

But it would be jumping to conclusions to say that your second serve should therefore be as good as your first serve.  In fact, your first serve should typically be worse!

On your first serve it’s not just y that matters, because not all ways of not-winning are equivalent.  You have that second serve y to fall back on, so if you are going to not-win on your first serve, better that it come from a faulted first serve than a serve that goes in but loses the point.  You want to leverage your second chance.

So, you want in your arsenal a serve which has a lower y than your second serve (it can’t be higher because your second serve maximizes y) in return for a lower x.  That is, you want decisive serves and you are willing to fault more often to get them.  Of course the rate of substitution matters. The best of all first serves would be one that simply lowers x to zero with no sacrifice in winning percentage y.  At the other extreme you wouldn’t want to reduce y to zero.

But at the margin, if you can reduce x at the cost of a comparatively small reduction in y you will do that. Most players can make this trade-off and this is exactly how first serves differ from second serves in practice.  First serves are bombs that often go out, second serves are rarely aces.
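Here is the whole selection logic as a small Python sketch. The arsenal below is made up; only the decision rule comes from the argument above:

    # Each serve is a pair (x, y):
    #   x = P(serve goes in and the point is lost)
    #   y = P(serve goes in and the point is won)
    #   1 - x - y = P(fault)
    arsenal = [
        (0.15, 0.45),  # "bomb": decisive but faults 40% of the time
        (0.30, 0.55),  # balanced serve
        (0.40, 0.50),  # safe serve: almost always in, rarely decisive
    ]

    # Second serve: a fault now is a double fault, so simply maximize y.
    second = max(arsenal, key=lambda s: s[1])
    y2 = second[1]

    # First serve: a fault only sends you to the second serve, so the
    # probability of winning the point is y1 + P(fault) * y2.
    def first_serve_value(serve):
        x1, y1 = serve
        return y1 + (1 - x1 - y1) * y2

    first = max(arsenal, key=first_serve_value)
    print("second serve:", second)  # the balanced serve, highest y
    print("first serve: ", first)   # the bomb: lower y, much lower x

With these invented numbers the bomb wins the point 67% of the time as a first serve (0.45 plus a 40% fault rate times the second serve’s 0.55), even though it would be the worst possible second serve.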

So when Vanderbilt tennis coach Bill Tym says

“It’s an insidious disease of backing off the second serve after they miss the first serve,” said Tym, who thinks that players should simply make a tiny adjustment in their serves after missing rather than perform an alternate service motion meant mostly to get the ball in play. “They are at the mercy of their own making.”

he might just be thinking about it backwards.  The second serve is their best serve, but nevertheless it is a “backing-off” from their first serve because their first serve is (intentionally) excessively risky.

Statistically, the implications of this strategy are

  1. The winning percentage on first serves should be lower than on second serves.
  2. First serves go in less often than second serves.
  3. Conditional on a serve going in, the winning percentage on the first serve should be higher than on second serves.

The second and third are certainly true in practice.  And these refute the idea that the second serve should use the same technique as the first serve as suggested by the Vanderbilt coach. The first is true for most servers sampled in the NY Times piece.

Imagine the game:  you and your partner are holding opposite ends of a rope which has a ribbon hanging from the middle of it.  Your goal is to keep the ribbon dangling above a certain point marked on the ground.

This game is the Tug of Peace.  Unlike a tug of war, you do not want to pull harder than your partner.  In fact you want to pull exactly as hard as she pulls.

That shouldn’t be too difficult.  But what if you feel that she is starting to tug a little harder than at first and the ribbon starts to move away from you?  You will tug back to get it back in line.

But now she feels you tugging.  If she responds, it could easily escalate into an equilibrium in which each of you tugs hard in order to counteract the other’s hard tugging.

This is a metaphor for many relationship dysfunctions.  For no reason other than strategic uncertainty you get locked into a tug of peace in which each party is working hard to keep the relationship in balance.

There is an even starker game-theoretic metaphor.  Suppose that you choose simultaneously how hard you will tug and your choice is irreversible once the tugging begins.  You never know how cooperative your partner is, and so suppose there is a tiny chance that she wants the ribbon just a little bit on her side of the mark.

Ideally you would both like to tug with minimal effort just to keep the ribbon elevated.  But since there is a small probability she will tug harder than that you will tug just a little harder than that too to get the ribbon centered “on average.” Now, she knows this.  And whether or not she is cooperative she will anticipate your adjustment and tug a little harder herself.  But then you will tug all the harder.  And so on.

This little bit of incomplete information causes you both to tug as hard as you can.
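A toy best-response iteration (every number below is made up) traces the escalation:

    eps = 0.05       # chance she wants the ribbon slightly on her side
    shift = 0.5      # how much harder that type of partner tugs
    MAX_PULL = 10.0  # as hard as either player can possibly tug
    my_pull = her_pull = 1.0  # minimal effort that keeps the ribbon up

    for _ in range(400):
        # I match her expected tug, which with probability eps is harder:
        my_pull = min(MAX_PULL, her_pull + eps * shift)
        # she anticipates my adjustment and matches it in turn:
        her_pull = min(MAX_PULL, my_pull + eps * shift)

    print(my_pull, her_pull)  # 10.0 10.0: both tugging as hard as they can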

I don’t mind rejection but a lot of people do.

When I want to ask someone to join me for a coffee or lunch I always send email. And it’s all about rejection even though I don’t mind rejection so much. The reason is that nobody can know for sure how I deal with rejection.  And almost everybody hates to reject someone whose feelings might be hurt.

If I ask face-to-face I put my friend in an awkward position, because every second she pauses to think about it is an incremental rejection.  The delay could be just because she is thinking about scheduling, but it also could be that she is searching for excuses to get out of it.

These considerations combined with her good graces mean that she feels pressure to say yes and to say yes quickly.  Even if she really does want to go.  I would rather she have the opportunity to consider it fully and I would rather not make her feel uncomfortable.

Email adds some welcome noise to the transaction.  It is never common knowledge exactly when she is able to read my email invitation.  If she gets it right away she can comfortably consider the offer and her schedule and get back to me on her own time.  And she knows that I know that… that I have no way of knowing how much time it took her to decide.

Game theorists can’t stop trashing email as a coordination device but that’s because we always think that common knowledge is desirable.  When psychology is involved, more often we want to destroy common knowledge.

The atmosphere of melancholia on the show Mad Men has to be broken by brief bursts of bright comedy or an undercurrent of sexual intrigue.  In this instance, the show indulged in the use of (at least) three strategic ploys to distract us from the plight of sad, newly divorced Don Draper regretting he boinked his secretary.

Draper’s ad agency SCDP is facing competition from a small entrant, say agency X (I forgot the name).  SCDP has lost some accounts and is bidding for a new contract from Honda.  Honda has put strict limits on the bid, stipulating that only a storyboard should be presented, not a filmed ad.  SCDP cannot afford to produce a filmed ad and nor can agency X.  Also, Don believes the Japanese might not appreciate the rules of their auction being broken.  He comes up with a bluff: pretend to make a filmed ad and thereby trick agency X into making one.  The Japanese will reject them and agency X will be driven close to bankruptcy. The ploy works not because of the clichéd Japanese cultural stereotype embraced by Don but because Honda is using its own strategic ploy: it gets a better deal from its existing agency by threatening to switch to the winner from the auction.

Two players bluffing and lying.

And then another player, Dr. Faye, reveals her bluff.  She is not really married and is wearing a wedding ring to ward off unwanted male attention.  She tells Don and he wonders why she told him.  Faye smiles slightly.  We know why she revealed her hand and we wonder why Don doesn’t get it.  Married-Winner-Don of Seasons 1 to 2 and perhaps even Season 3 would have worked it out immediately. But Single-Loser-Don of Season 4 is missing even blindingly obvious signals.  I guess codes will be broken in a later show.

First watch the video below.  The dark-haired guy, Booth, has just made a big bet. He is claiming to have three-of-a-kind (fours).  If he does he would win the hand, but he might be bluffing.  The other guy, Lingren, has to decide whether to call the bet and he does something unexpected:  he asks Booth to show him one of his cards:

The strategic subtext is this:  if Booth has the third four then he wants Lingren to call.  If not, he wants him to fold.  Implicitly, Lingren is offering the following mechanism:  if you do have the third four then you won’t want me to know it because I would then fold.  So show me a card, and if it’s not a four I will call you.

What is left unsaid is what Lingren would do if Booth declined to show a card.  The spirit of the mechanism is that showing a card is the price Booth has to pay to have his bet called.  So the suggestion is that Lingren would fold if Booth is not forthcoming because that would signal that he is hiding his strong hand.

But in fact this can’t be part of the deal because it would imply exactly the opposite of Lingren’s expectations.  Booth, knowing that it would get Lingren to fold, would in fact hide his cards when he is bluffing and show a card when he actually has the three-of-a-kind (because then he gets a 50% chance of having his bet called rather than a 100% chance of Lingren folding.)
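As a starting point, here is that comparison as a sketch, with made-up pot and bet sizes:

    pot, bet = 100, 50  # invented stakes: win `pot` on a fold,
                        # win or lose `bet` more on a call

    # The proposed mechanism: if Booth shows a card, Lingren calls unless
    # the shown card is the third four; if Booth declines, Lingren folds.
    def booth_value(has_four, shows_card):
        if not shows_card:
            return pot  # declining triggers a fold
        if has_four:
            # one of two hole cards is the four; 50% chance it gets shown
            return 0.5 * pot + 0.5 * (pot + bet)  # fold / called and win
        return -bet     # a bluffer never shows a four, so he gets called

    for has_four in (True, False):
        print(has_four,
              "show:", booth_value(has_four, True),
              "decline:", booth_value(has_four, False))
    # With the four, showing (125) beats declining (100); when bluffing,
    # declining (100) beats showing (-50). Exactly the reversal above.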

So what exactly should happen in this situation?  And did Booth really play like a genius? Leave your analysis in the comments.

Visor visit: the ever-durable Presh Talwalker.

Here is the abstract from a paper by Matthew Pearson and Burkhard Schipper:

In an experiment using two-bidder first-price sealed bid auctions with symmetric independent private values, we collected information on the female participants’ menstrual cycles. We find that women bid significantly higher than men in their menstrual and premenstrual phase but do not bid significantly different in other phases of the menstrual cycle. We suggest an evolutionary hypothesis according to which women are genetically predisposed by hormones to generally behave more riskily during their fertile phase of their menstrual cycle in order to increase the probability of conception, quality of offspring, and genetic variety.

Believe it or not, this contributes to a growing literature.

Jeff discussed a seminal game theoretic analysis of Cheap Talk in an earlier post: “Strategic Information Transmission” by Crawford and Sobel studies a decision-maker, the Receiver, who listens to a message from an informed advisor, the Sender, before making a decision.  The optimal decision for each player depends on the information held by the Sender. If the Sender and Receiver have the same preferences over the best decision, there is an equilibrium where the Sender reports his information truthfully and the Receiver makes the best possible decision.

What if the Sender is biased and wants a different decision, say a bit to the left of the Receiver’s optimal decision? Then the Sender has an incentive to lie and move the Receiver to the left and always telling the truth is no longer an equilibrium.  Crawford and Sobel show that this implies that in equilibrium information can only be conveyed in lumpy chunks and the Receiver takes the best expected decision for each chunk.  The bigger the bias, the less information can be transmitted in equilibrium and the larger each lump must be.

The Crawford-Sobel model has differences of opinion generated by differences in preferences.  But individuals who have the same preferences but different beliefs also have differences of opinion.  The Sender and Receiver may agree that if global warming is occurring drastic action should be taken to slow it down.  But the Sender may believe it is occurring while the Receiver believes it is not.  Differences in beliefs seem to create a similar bias to differences in preferences and hence one might conjecture there is little difference between them.  A lovely paper by Che and Kartik shows this is not the case.  If a Sender with a belief-based bias acquires information, his belief changes.  If signals are informative, his beliefs must move closer to the truth and his bias must go down.  If a Sender with a preference-based bias acquires information, his bias by definition does not change.  So, when there are belief-based differences in opinion, information acquisition changes the bias, indeed it reduces it.  This allows the Sender to transmit more information in equilibrium and improve the Receiver’s decision implementation (this is the Crawford-Sobel intuition but in a different model).  The Sender values this influence and has good incentives to acquire information.  Hiring an advisor with a different belief is valuable for the decision-maker, better than having a Yes-Man.  Some pretty cool and fun insights.  And it is always nice when the intuition is well explained and related to classical work.
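For readers who want to see the machinery, here is the textbook uniform-quadratic case of Crawford-Sobel (state uniform on [0,1], quadratic losses, Sender bias b) in a few lines of Python; the bias value is arbitrary:

    def max_partition_size(b):
        # largest N with 2*b*N*(N-1) < 1: the finest equilibrium partition
        N = 1
        while 2 * b * (N + 1) * N < 1:
            N += 1
        return N

    def cutoffs(b, N):
        # interval boundaries a_i = i/N + 2*b*i*(i - N), for i = 0..N
        return [i / N + 2 * b * i * (i - N) for i in range(N + 1)]

    b = 0.05                   # arbitrary bias, for illustration only
    N = max_partition_size(b)  # bigger bias, smaller N, lumpier chunks
    print(N, cutoffs(b, N))    # 3 [0.0, 0.133..., 0.466..., 1.0]

In this notation the Che-Kartik point is that information which shrinks b raises N, so a Sender whose bias is belief-based can end up transmitting finer chunks.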

There is lots of other subtle stuff and I am not doing justice to the paper.  You can find the paper Opinions as Incentives on Navin’s webpage.

How to allocate an indivisible object to one of three children: it’s a parent’s daily mechanism design problem. Today I used the first-response mechanism.  “Who wants X?”  And whoever says “me!” first gets it.

This dimension of screening, response time, is absent from most theoretical treatments.  While in principle it can be modeled, it won’t arise in conventional models because “rational” agents take no time to decide what they want.

But the idea behind using it in practice is that the quicker you can commit yourself the more likely it is you value it a lot.  Of course it doesn’t work with “who wants ice cream?”   But it does make sense when it’s “We’ve got 3 popsicles, who wants the blue one?”  We are aiming at efficiency here since fairness is either moot (because any allocation is going to leave two out in the cold) or a part of a long-run scheme whereby each child wins with equal frequency asymptotically.

It’s not without its problems.

  1. Free disposal is hard to prevent.  Eventually the precocious child figures out to shout first and think later, reneging if she realizes she doesn’t want it.
  2. There’s also ex-post negotiation.  You might think that this can only lead to Pareto improvements but not so fast.  Child #1 can “strongly encourage” child #2 to hand over the goodies.  A trade of goods for “security” is not necessarily Pareto improving when the incentives are fully accounted for.
  3. It prevents efficient combinatorial allocation when there are externalities and/or complementarities.  Such as, “who’s going in Mommy’s car?”  A too-quick “me!” invites a version of the exposure problem if child #3 follows suit.

Still, it has its place in a parent’s repertoire of mechanisms.

Suppose in a Department in a university there are two specializations, E and T.  The Department has openings coming up over time and must hire to fill the slot when it appears or let it lapse, perhaps with some chance of getting it the following year.

The Department can hire on the “best athlete” criterion: just choose the best candidates, regardless of specialization.  Or it could take a “Noah’s Ark” approach and let in one E specialist for each T specialist (perhaps done intertemporally if there are fewer than two slots per year).  Both approaches are used in hiring in practice.  How does the best approach depend on the environment?

To think this through, let’s suppose the Department uses the best athlete criterion.  There are two problems.  First, if specialty T has lower standards than specialty E, they will propose more candidates.  They may exaggerate their quality if it is hard to assess.  Or specialty T may simply want to increase in size – there will be more people to interact with, collaborate with etc.  How should specialty E respond?  They know that if they stick to their high standards, the Department will be swamped by Ts.  So, they lower their bar for hiring, reasoning that their candidate has to be better than the marginal candidate brought in by the Ts, a weaker criterion.  In other words, the best athlete hiring system leads to a “race to the bottom”.

Hiring by the Noah’s Ark system prevents this from happening.  The two groups might have different standards or want to empire build.  But each group is not threatened by the other as their slots are safe.  This comes at a cost – if the fraction of good candidates in each field differs from the slot allocation in the Department, it will miss out on the best possible combination of hires.  So, if the corporate culture is good enough and everyone internalizes the social welfare function, it is better to have the best athlete criterion.

This is in fact an excellent introduction to game theory full stop.  It covers strategic and extensive games, complete and incomplete information, sequential rationality, etc.  Very nicely.  And then on page 64 it gets really interesting, applying evolutionary game theory to pragmatics, a field in linguistics concerned with the contextual meaning of language.

I thank Presh Talwalker for the pointer.  Pretty soon I won’t have to do any teaching, I’ll just play YouTube clips for 90 minutes, pass out the chocolate and send them on their way.

Suppose you are selling your house and 10 potential buyers are lined up.  For whatever reason you cannot hold an auction (in fact sellers rarely do) but what you can do is make take-it-or-leave-it price demands.  To be clear:  this means that you can approach buyers in sequence proposing to each a price.  If a buyer accepts you are committed to sell and if he rejects you are committed to refuse sale to this buyer.  All buyers are ex ante identical, meaning that while you don’t know their maximum willingness to pay, you have the same beliefs about each of them.  How do you determine the profit-maximizing price?

It is somewhat surprising that despite the symmetry, in order to maximize profits you will discriminate and charge them different prices.  What you will do is randomly order them and offer a descending sequence of prices.  The buyer who was randomly put first in the order (unlucky?) will be charged the highest price and this is an essential part of your optimal pricing policy.

Although it sounds surprising at first, the intuition is pretty simple: it’s an application of the idea of option value.  When you have only one buyer left you will charge him some price p.  This price balances a tradeoff between a high price conditional on sale and the risk of having the offer rejected.  Since this is the last buyer, the cost of that downside is that you will not make a sale.

Now the same tradeoff determines your price to the second-to-last buyer.  Except now the cost of having your offer rejected is lower because you will have another chance to sell.  So you are willing to take a larger chance of a rejected offer and therefore set a higher price.  Now continuing up the list, at every step the option value associated with a rejected offer increases and therefore so does the price.
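The backward induction is easy to code. The sketch below assumes, purely for concreteness, that each buyer’s value is uniform on [0,1]:

    def price_path(n_buyers):
        V = 0.0       # continuation value with no buyers left
        prices = []
        for k in range(1, n_buyers + 1):
            # With k buyers left, choose p to maximize (1-p)*p + p*V:
            # the sale probability times the price, plus the rejection
            # probability times the option value of the remaining buyers.
            p = (1 + V) / 2  # first-order condition
            V = (1 - p) * p + p * V
            prices.append(p)
        # prices[k-1] is charged when k buyers remain, so the seller
        # quotes them in reverse: the highest price to the first buyer.
        return list(reversed(prices))

    print(price_path(10))  # strictly descending, ending at 0.5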

OK that was easy.  Now consider a model where the seller posts prices and the buyers choose when to arrive.  This should break the symmetry if higher value buyers arrive earlier or later than lower value buyers.  And they will, for two reasons.  First, nobody with a willingness to pay that is below the opening price will want to be first.  Second, even among buyers with a high willingness to pay, the higher it is the more the buyer values the increased chance to buy relative to lower prices later.  (There is a “single-crossing property.”)  The seller adjusts to this by further steepening the price path, etc.

Thanks to Toomas Hinnosaar for conversations on the topic.  Here is a paper by Liad Blumrosen and Thomas Holenstein on optimal posted prices.

Despite what you have read, theory holds up just fine.

The relationship between economic theory and experimental evidence is controversial. One could easily get the impression from reading the experimental literature that economic theory has little or no significance for explaining experimental results. The point of this essay is that this is a tremendously misleading impression. Economic theory makes strong predictions about many situations, and is generally quite accurate in predicting behavior in the laboratory. In most familiar situations where the theory is thought to fail, the failure is in the application of the theory, and not in the theory failing to explain the evidence.

Which is not to say theory doesn’t have its problems.

That said, economic theory still needs to be strengthened to deal with experimental data: the problem is that in too many applications the theory is correct only in the sense that it has little to say about what will happen. Rather than speaking of whether the theory is correct or incorrect, the relevant question turns out to be whether it is useful or not useful. In many instances it is not useful. It may not be able to predict precisely how players will play in unfamiliar situations. It buries too much in individual preferences without attempting to understand how individual preferences are related to particular environments. This latter failing is especially true when it comes to preferences involving risk and time, and in preferences involving interpersonal comparisons – altruism, spite and fairness.

Sarcasm is a way of being nasty without leaving a paper trail.

If I say “No dear, of course I don’t mind waiting for you, in fact, sitting out here with the engine running is exactly how I planned to spend this whole afternoon” then the literal meaning of my words leaves me completely blameless despite their clearly understood venom.

This convention had to evolve.  If it didn’t already exist it would be invented. A world without sarcasm would be out of equilibrium.

Because if sarcasm did not exist then I would have the following arbitrage opportunity: I can have a private vindictive chuckle by giving my wife that nasty retort without her knowing I was being nasty.  The dramatic irony of that is an added bonus.

That explains the invention of sarcasm.  But it evolves from there.  Once sarcasm comes into existence then the listener learns to recognize it.  This blunts the effect but doesn’t remove it altogether.  Because unless it’s someone who knows you very well, the listener may know that you are being sarcastic but it will not be common knowledge.  She feels a little less embarrassment about the insult if there is a chance that you don’t know that she knows that you are insulting her, or if there was some higher-order uncertainty.  If instead you had used plain language then the insult would be self-evident.

And even when it’s your spouse and she is very accustomed to your use of sarcasm, the convention still serves a purpose.  Now you start to use the tone of your voice to add color to the sarcasm.  You can say it in a way that actually softens the insult.  “Dinner was delicious.” A smile helps.

But you can make it even more nasty too.  Because once it becomes common knowledge that you are being sarcastic, the effect is like a piledriver.  She is lifted for the briefest of moments by the literal words and then it’s an even bigger drop from there when she detects the sarcasm and knows that you know that she knows …. that you intentionally set the piledriver in motion.

Sarcasm could be modeled using the tools of psychological game theory.

Here’s an experiment you can do that will teach you something.  Get a partner.  Think of a famous song and clap out the melody of the song as you sing it in your head.  You want your partner to be able to guess the song.

Out of ten tries how often do you think she will guess right?  Well she will guess right a lot less than that.  This is the illusion of transparency which is very nicely profiled in this post at You Are Not So Smart.  We overestimate how easily our outward expressions communicate what is in our heads.

This should be an important element of behavioral game theory because game theory is all about guessing your partner’s intentions.  As far as I know, biases in estimates of others’ estimates of my strategy are untapped in behavioral game theory.  Their effects should be easily testable by having players make predictions about others’ predictions before the play of a game.

There are games where I want my partner to know my intentions.  For example I want my wife to know that I will be picking up coffee beans on the way home, so she doesn’t have to.  Of course I can always tell her, but if I overestimate my transparency we might have too little communication and mis-coordinate.

Then there are games where I want to hide my intentions.  In Rock-Scissors-Paper it shouldn’t matter.  I might think that she knows I am going to play Rock, and so at the last minute I might switch to Scissors, but this doesn’t change my overall distribution of play.

It should matter a lot in a casual game of poker.  If my opponent has a transparency illusion he will probably bluff less than he should out of fear that his bluffing is too easy to detect.  So if I know about the transparency illusion I should expect my opponent on average to bluff less often.

But, if he is also aware of the transparency illusion and he has learned to correct for it, then this changes his behavior too.  Because he knows that I am not sure whether he suffers from the illusion or not, and so by the previous paragraph he expects me to fold in the face of a bluff.  So he will bluff more often.

Now, knowing this, how often should I call his bets?  What is the equilibrium when there is incomplete information about the degree of transparency illusion?

In a long game of course reputation effects come in.  I want you to believe that I have a transparency illusion so I might bluff less early on.

Usually you order a bottle of wine in a restaurant and the waiter/wine guy opens it and pours a little for you to taste.  Conventionally, you are not supposed to be deciding whether you made a good choice, just whether or not the wine is corked, i.e. spoiled due to a bottling mishap or bad handling.  In practice this itself requires a well-trained nose.

But in some restaurants, the sommelier moves first:  he tastes the wine and then tells you whether or not it is good.

Suspicions are not the only reason some people object to this practice. Others feel they are the best judges of whether a wine is flawed or not, and do not appreciate sommeliers appropriating their role.

We should notice though that it goes two ways.  There are two instances where the change of timing will matter.  First there is the case where the diner thinks the wine is bad but the sommelier does not.  Here the change of timing will lead to more people drinking wine that they would have rejected.  But that doesn’t mean they are worse off.  In fact, diners who are sufficiently convinced will still reject the wine and a sommelier whose primary goal is to keep the clientele happy will oblige.  But more often in these cases just knowing that an expert judges the wine to be drinkable will make it drinkable. On top of this psychological effect, the diner is better off because when he is uncertain he is spared the burden of sticking his neck out and suggesting that the wine may be spoiled.

But the reverse instance is by all accounts the more typical:  diners drinking corked bottles because they don’t feel confident enough to call in the wine guy.  I have heard from a master sommelier that about 10% of all bottles are corked!  Here the sommelier-moves-first regime is unambiguously better for the customer because a faithful wine guy will reject the bottle for him.

Unless the incentive problem gets in the way.  Because if the sommelier is believed to be an expert acting in good faith, then he never lets you drink a corked bottle.  You rationally infer that any bottle he pours for you is not spoiled, and you accept it even if you don’t think it tastes so good.  But this leads to the Shady Sommelier Syndrome:  As long as he has the tiniest regard for the bottom line, he will shade his strategy at least a little bit, giving you bottles that he judges to be possibly, or maybe certainly just a little bit, corked.  You of course know this and now you are back to the old regime where, even after he moves first, you are still a little suspicious of the wine and now it’s your move.  And your bottle is already one sommelier-sip lighter.

You are a poor pleb working in a large organization.  Your career has reached a stage where you are asked to join one of two divisions, division A or division B.  You can’t avoid the choice even if you prefer the status quo – it would be bad for your career.  Each division is controlled by a boss.  Boss A is sneaky and self-serving; perhaps he is “rational” in the parlance of economics.  Even better, perhaps his strategy is quite transparent to you after a brief chat with him so you can predict his every move.  He is the Devil you know. Boss B might be rational or might be somewhat altruistic and have your best interests at heart.  He is the Devil you don’t know.  Neither boss is going anywhere soon and you have no realistic chance of further advancement.  You will be interacting frequently with the boss of the division you choose.

Which division should you join?

You face a trade-off it seems.  If you join division A, it is easier for you to play a best-response to boss A’s strategy – you can pretty much work out what it is.  If you join division B, it is harder but the fact that you don’t know can help your strategic interaction.

For example, suppose you are playing a game where “cooperation” is not an equilibrium if it is common knowledge that both players are rational – the classical story is the Prisoner’s Dilemma.  Then, the incomplete information might help you to cooperate.  If you do not cooperate, you reveal you are rational and the game collapses into joint defection.  If you cooperate, you might be able to sustain cooperation well into the future (this is the famous work of Kreps, Milgrom, Roberts and Wilson).

On the other hand, if you are playing a pure coordination game, this logic is less useful.  All you care about is the action the other player is going to take and you want to play a best response to it.  So, the division you should join depends on the structure of the later boss-pleb game.

Perhaps it is possible to frame this question in such a way that the existing reputation and game theory literature tells us when incomplete information should be welcomed by the pleb, so that you should play with the Devil you don’t know, and when it is bad, so that you should play with the Devil you know.

If you play tennis then you know the coordination problem.  Fumbling in your pocket to grab a ball and your rallying partner doing the same and then the kabuki dance of who’s gonna pocket the ball and who’s going to hit first?  Sometimes you coordinate, but seemingly just as often the balls are simultaneously repocketed or they cross each other at the net after you both hit.

Rallying with an odd number of balls gives you a simple coordination device.  You will always start with an unequal number of balls, and it will always be common knowledge how many each has even if the balls are in your pockets.

I used to think that the person holding 2 or more should hit first.  That’s a bad convention because after the first rally you are back to a position of symmetry.  (And a convention based on who started with two will fail the common knowledge test due to imperfect memory, especially when the rally was a long one.)

Instead, the person holding 1 ball should hit first.  Then the subgame following that first rally is trivially solved because there is only one feasible convention.

By the way, this observation is a key lemma in any solution to Tyler Cowen’s tennis ball problem.

Of course this works with any odd number of balls.  But five is worse.  It becomes too hard to keep track of so many balls and eventually you will lose common knowledge of the total number of balls in rotation.

It’s a variation on the old coordinated attack problem or Rubinstein’s electronic mail game.  But this one is much simpler and even more surprising.  It is due to my colleague Jakub Steiner and his co-author Colin Stewart.

Two generals, you and me, have to coordinate an attack on the enemy.  An attack will succeed only if we both attack at the same time and if the enemy is vulnerable.

From my position I can directly observe whether the enemy is vulnerable.  You on the other hand must send a scout and he will return at some random time. We agree that once you learn that the enemy is vulnerable, you will send a pigeon to me confirming that an attack should commence.  It will take your pigeon either one day or two to complete the trip.

Suppose that indeed the enemy is vulnerable, I observe that is the case, and on day n your pigeon arrives informing me that you know it too.  I am supposed to attack.  But will I?

Since you sent a pigeon I know that you know that the enemy is vulnerable.  But what day did you send your pigeon?  It could be either n-1 or n-2.  Suppose it was n-1, i.e. the pigeon arrived in one day.  Then you don’t know for sure that the pigeon has arrived yet.  So you don’t know that I know that you know that the enemy is vulnerable.  And that means you can’t be certain that I will attack so you will not attack.  And now since I cannot rule out that you sent the pigeon on day n-1, and if that was indeed the date you sent it you will not attack, then I will not attack either.

Thus, an attack will not occur the day I receive the pigeon.  In a certain sense this is obvious because only I know what day I receive the pigeon.  But the surprising thing is that there is no system we can use to decide the date of an attack and have it be successful.

Suppose that we have decided on some system and according to that system I am supposed to attack on date k.  What must be true for me to actually be willing to follow through?  First, I must expect you to be attacking too.  And since you will only attack if you know that the enemy is vulnerable, I will only attack if I have received your pigeon confirming that you know.

But that is not enough.  You will only attack if you know that I will attack and we just argued that this requires that I know that you know that the enemy is vulnerable.  So you will attack only if you know that I have received your pigeon.  You can only be sure of this 2 days after you sent it.  And since I need to be sure you will attack, I will only attack if I received the pigeon yesterday or earlier so that I am sure that you sent it at least 2 days ago and are therefore sure that I have already received it.

But that is still not enough.  Since we have just argued that I will only attack if I received your pigeon at least 1 day ago, you can only be certain that I will attack if you sent your pigeon at least 3 days ago.  And that is therefore necessary for you to be prepared to attack.  But now since I will attack only if I am certain that you will attack, I need to be certain that you sent your pigeon at least 3 days ago and that requires that I received your pigeon at least 2 days ago (and not only yesterday.)

This goes on.  In order for me to attack I must know that you know that I know, etc. etc. that the enemy is vulnerable.  And each additional iteration of this requires that the pigeon be sent one day earlier than the previous iteration. Eventually we run out of earlier days because today is day k.  This means that I will not attack because I cannot be sure that you are sure that (iterate k times) that the enemy is vulnerable.
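The induction can be rendered as a toy check; the day-zero convention is mine, while the one-day-earlier-per-iteration step is the argument above:

    def supportable(k, depth):
        # Sustaining `depth` levels of "I know that you know that ..."
        # requires the pigeon to have been sent at least `depth` days
        # before day k, and it cannot have been sent before day 0.
        return k - depth >= 0

    k = 5  # suppose the system says: attack on day 5
    print([supportable(k, m) for m in range(1, 9)])
    # [True, True, True, True, True, False, False, False]
    # Any fixed attack date fails at some finite depth, and a successful
    # attack needs every depth (common knowledge), so no system works.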

An eternal puzzle is how a husband/father handles visits by his mother without agonizing conflict between the wife and her mother-in-law.  Here is my Machiavellian solution.  The husband should engineer a conflict with his mother that puts him in the wrong.  Then the wife and her mother-in-law will naturally bond in the face of a mutual enemy.  Don’t forget the key condition that the crime has to be egregious enough so the wife does not come to your defense.  This is why the conflict should not be with the wife:  your mother, being your mother,  is naturally more inclined to side with you.  Added bonus:  husband is conveniently ostracized!

FIFA experimented with a “sudden-death” overtime format during the 1998 and 2002 World Cup tournaments, but the so-called golden goal was abandoned as of 2006.  The old format is again in use in the current World Cup, in which a tie after the first 90 minutes is followed by an entire 30 minutes of extra time.

One of the cited reasons for reverting to the old system was that the golden goal made teams conservative. They were presumed to fear that attacking play would leave them exposed to a fatal counterattack.  But this analysis is questionable.  Without the golden goal attacking play also leaves a team exposed to the possibility of a nearly insurmountable one-goal deficit.  So the cost of attacking is nearly the same, and without the golden goal the benefit of attacking is obviously reduced.

Here is where some simple modeling can shed some light.  Suppose that we divide extra time into two periods.  Our team can either play cautiously or attack.  In the last period, if the game is tied, our team will win with probability p and lose with probability q, and with the remaining probability, the match will remain tied and go to penalties.  Let’s suppose that a penalty shootout is equivalent to a fair coin toss.

Then, assigning a value of 1 for a win and -1 for a loss, p-q is our team’s expected payoff if the game is tied going into the second period of extra time.

Now we are in the first period of extra time.  Here’s how we will model the tradeoff between attacking and playing cautiously.  If we attack, we increase by G the probability that we score a goal.  But we have to take risks to attack and so we also increase by L the probability that they score a goal.  (To keep things simple we will assume that at most one goal will be scored in the first period of extra time.)

If we don’t attack there is some probability of a goal scored, and some probability of a scoreless first period.  So what we are really doing by attacking is taking a G-sized chunk of the probability of a scoreless first period and turning it into a one-goal advantage, and also an L-sized chunk and turning that into a one-goal deficit.  We can analyze the relative benefits of doing so in the golden goal system versus the current system.

In the golden goal system, the event of a scoreless first period leads to value p-q as we analyzed at the beginning.  Since a goal in the first period ends the game immediately, the gain from attacking is

G - L + (1-G-L)(p-q).

(A chunk of size G+L of the probability of a scoreless first period is now decisive, with net value G-L, and the remaining chunk will still be scoreless and decided in the second period.)  So, we will attack if

p - q \leq G - L + (1 - G - L) (p-q)

This inequality is comparing the value of the event of a scoreless first period p-q versus the value of taking a chunk of that probability and re-allocating it by attacking.  (Playing cautiously doesn’t guarantee a scoreless first period, but we have already netted out the payoff from the decisive first-period outcomes because we are focusing on the net changes G and L to the scoring probability due to attacking.)

Rearranging, we attack if

p - q \leq \frac{G-L}{G+L}.

Now, if we switch to the current system, a goal in the first period is not decisive.  Let’s write y for the probability that a team with a one-goal advantage holds onto that lead in the second period and wins.  With the remaining probability, the other team scores the tying goal and sends the match to penalties.

Now the comparison is changed because attacking only alters probability-chunks of size yG and yL.  We attack if

p - q \leq Gy - Ly + (1 - G - L) (p-q),

which re-arranges to

p - q \leq y\frac{G-L}{G+L}

and since y < 1, the right-hand side is now smaller.  The upshot is that the set of parameter values (p,q,y,G,L) under which we prefer to attack under the current system is a strictly smaller subset of those that would lead us to attack under the golden goal system.
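A quick numeric check of the two conditions (the parameter values are invented, not calibrated to any real match):

    p, q = 0.55, 0.20  # win/lose probabilities if tied entering period 2
    G, L = 0.25, 0.10  # added scoring chances for us / for them if we attack
    y = 0.70           # chance a one-goal lead survives the second period

    golden = (G - L) / (G + L)  # about 0.43: attack threshold, golden goal
    current = y * golden        # 0.30: attack threshold, current rules

    print(p - q <= golden)   # True: attack under the golden goal
    print(p - q <= current)  # False: play cautiously under current rules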

The golden goal encourages attacking play.  The intuition coming from the formulas is the following.  If p > q, then our team has the advantage in a second period of extra time.  In order for us to be willing to jeopardize some of that advantage by taking risks in the first period, we must win a sufficiently large mass of the newly-created first-period scoring outcomes.  The current system allows some of those outcomes (a fraction 1-y of them) to be undone by a second-period equalizer, and so the current system mutes the benefits of attacking.

And if p < q, then we are the weaker team in extra time and so we want to attack in either case.  (This is assuming G > L.  If G < L then the result is the same but the intuition is a little different.)

I haven’t checked it but I would guess that the conclusion is the same for any number of “periods” of extra time (so that we can think of a period as just representing a short interval of time.)

You (the sender) would like someone (the responder) to do you a favor, support some decision you propose or give you some resource you value.  You email the responder, asking him for help.  There is no reply.  Maybe he has an overactive Junk Mail filter or missed the email.  You email the responder again. No reply.  The first time round, you can tell yourself that maybe the responder just missed your request.  The second time, you realize the responder will not help you.  Saying Nothing is the same as saying “No”.

Why not just say No to begin with?  Initially, the responder hopes you do not send the second email.  Then, when the responder reverses roles and asks you for help, you will not hold an explicit No against him.  By the time the second email is sent and received, it is too late – at this point whether you respond or not, there is a “No” on the table and your relationship has taken a hit.  The sender will eventually learn that often no response means “No”.  Sending a second email, while clearing up the possibility the first non-response was an error, may lead to a worsening of the relationship between the two players.  So, the sender will weigh the consequences of the second email carefully and perhaps self-censor and never send it.

Then, Saying Nothing will certainly be better than Saying No for the responder and a communication norm is born.