In my kids’ tennis class they are getting good enough to have actual rallies. The coach feeds them a ball and has them play out points. Each rally is worth 1 point and they play to 10. To stop them from trying to hit winners on the first shot, and in an attempt to get them to play longer rallies, the coaches tried out an interesting rule: “The ball must cross the net four times before the point begins. If your shot goes out before that, it’s 2 points for the other side.”
Amnesty –forgiving all of the current and previous violators but renewing a threat to punish future violators– always seems like a reputation fail. If we are granting amnesty today then doesn’t that signal that we will eventually be granting amnesty again in the future?
But there is at least one environment in which a once-only amnesty is incentive compatible and effective: when crime has bandwagon effects. For example, suppose there’s a stash of candy in the pantry and my kids have taken to raiding it. I catch one red-handed but I can’t punish her because she rightly points out that since everybody’s doing it she assumed we were looking the other way. A culture of candy crime had taken hold.
An amnesty (bring me your private stash and you will be forgiven) moves us from the “everyone’s a criminal because everyone’s a criminal” equilibrium to the one in which nobody’s a criminal. The latter is potentially stable if it’s easier to single out and punish a lone offender than one of many.
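The two equilibria are easy to see in a toy model. A sketch in Python, with made-up numbers: punishment gets diluted when many kids offend, so whether stealing pays depends on how many others steal.

```python
# Toy model of amnesty with bandwagon effects in crime (illustrative
# numbers, not from the post): each of N kids chooses whether to raid
# the candy stash. Punishment is diluted when many offend: a kid who
# steals alongside k-1 others is caught with probability 1/k.

N = 10      # number of kids
B = 1.0     # benefit from stolen candy
P = 3.0     # cost of being punished

def steal_payoff(num_other_stealers):
    """Payoff from stealing when num_other_stealers also steal."""
    k = num_other_stealers + 1          # total offenders, including me
    return B - P / k                    # punishment probability is 1/k

# Equilibrium 1: everybody steals. Deviating to honesty yields 0,
# while stealing in the crowd pays B - P/N > 0.
assert steal_payoff(N - 1) > 0

# Equilibrium 2: nobody steals. A lone deviant is punished for sure,
# so stealing alone pays B - P < 0.
assert steal_payoff(0) < 0

print("both all-steal and no-steal are Nash equilibria")
```

The amnesty doesn't change the payoffs; it just moves play from the first equilibrium to the second.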
Spouse A (henceforth “she”, the driver) prefers the air inside the vehicle to be a little warmer than the preferred temperature of Spouse B (“he”, the navigator, not because he is a worse driver –quite the contrary– but because he is an even better passenger.) In their regular confrontation with this dilemma they are seemingly blessed with the optional dual-zone climate control in their decked out Volvo SUV.
And indeed there is an equilibrium of the dual climate-zone game in which each spouse enjoys his/her temperature bliss point. This equilibrium is unfortunately highly unstable. Because of the exchange of heat across the thermal gradient the only way each can maintain the constant target temperature is to adjust their controllers so that the air blown out their respective vents deviates slightly from that target further in the direction of the extreme. Hers must be set somewhat warmer and his somewhat cooler.
Now from that starting point, the slightest perturbation upsets the delicate balance and can set off a dangerous chain reaction. Consider for example what happens when, due to random alterations in air flow, she begins to feel a bit on the cool side of her comfort zone. Her response is to adjust her controller one peg toward the red. This restores her comfort level, but very soon, as a result, he will begin to feel the discomfort of unexpectedly hot and dry air blowing into his zone, and he will react by moving his controller one peg toward the blue.
This is not likely to end well.
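Here is a toy simulation of the spiral, under my own stylized assumption that each spouse reacts to the air arriving from the other's vent (one peg at a time) rather than to the average cabin temperature.

```python
# A toy simulation of the dual-zone spiral (my own stylized assumption:
# each spouse reacts to the air arriving from the other's vent, moving
# one peg whenever it strays outside his or her comfort band).

HER_BLISS, HIS_BLISS = 72, 68      # Fahrenheit bliss points
MIN_SET, MAX_SET = 60, 80          # controller limits
TOL = 1                            # comfort band, in degrees

her_set, his_set = 73, 67          # the unstable "balanced" start

for _ in range(100):
    moved = False
    if his_set < HER_BLISS - TOL and her_set < MAX_SET:
        her_set += 1               # she feels his cool draft: one peg red
        moved = True
    if her_set > HIS_BLISS + TOL and his_set > MIN_SET:
        his_set -= 1               # he feels her hot draft: one peg blue
        moved = True
    if not moved:
        break

print(her_set, his_set)            # the spiral pins both at the extremes
```

Each one-peg correction makes the draft from the other zone worse, so the process only stops when both controllers are pinned at their limits.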
Star Michigan guard Trey Burke collected two personal fouls in the early minutes of the National Championship game against Louisville and he was promptly benched, sitting out most of the remaining first half. The announcers didn’t bother to say why, because it’s common wisdom that you don’t want your best players fouling out early.
But the common wisdom requires some scrutiny because on its surface it actually looks absurd. You fear your best player fouling out because then his playing time might be limited. So in response you guarantee his playing time will be limited by benching him. Jonathon Weinstein once made this point.
But just because basketball commentators, and probably even basketball coaches, don’t properly understand the rationale for the strategy doesn’t mean the strategy is unsound. In fact it follows from a very basic strategic idea: information is valuable.
Suppose the other team is scoring points at some random rate. If they are lucky they score a lot and if they are less lucky they score less. If the other team scores a lot your team should start shooting threes and go for short possessions to catch up. If the other team scores less you should go for safer shots and run down the clock. But you only know which of these you should do at the end of the game. If your best players are on the bench at that time you cannot capitalize on this information.
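A back-of-the-envelope version of the argument, with invented numbers: the star's minutes are most valuable at the end, because only then do you know which strategy his skill should be executing.

```python
# A toy calculation (my numbers, not the post's) of why you bench a star
# in foul trouble: his minutes are worth the most at the end of the game,
# when you have learned whether you are ahead or behind and hence whether
# to gamble on threes or run down the clock.

q_foul_out = 0.4   # chance the star fouls out if he plays through the half
p_star     = 0.7   # endgame win prob with the star executing the right plan
p_bench    = 0.5   # endgame win prob without him
early_edge = 0.05  # extra win prob from his early-game minutes

# Play him early: small early edge, but he may be gone for the endgame.
ev_play  = early_edge + (1 - q_foul_out) * p_star + q_foul_out * p_bench
# Bench him early: forgo the early edge, guarantee him for the endgame.
ev_bench = p_star

print(round(ev_play, 3), round(ev_bench, 3))   # benching wins here
```

With these numbers benching gives 0.70 against 0.67: the information available at the end of the game makes the star's late minutes worth more than his early ones.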
NEWRY, Maine — A Finnish couple has added to their victories by taking first place in the North American Wife Carrying Championship at Maine’s Sunday River ski resort.
Taisto Miettinen and Kristina Haapanen traveled from Helsinki, Finland – where they won the World Wife Carrying Championship – for Saturday’s contest. The Sun Journal (bit.ly/Q30QWq) reports that the couple finished with a time of 52.58 seconds on a course that includes hurdles, sand traps and a water hole.
The winners receive the woman’s weight in beer and five times her weight in cash.
The model: At date 0 each of N husbands decides how fat his wife should be. At date 1 they run a wife-carrying race, where the husband’s speed is given by some function f(s,w), where s is the strength of the husband and w is the weight of his wife. The function f is increasing in its first argument and decreasing in the second. The winner gets K times his wife’s weight in cash and beer.

Questions:
- If the husbands are symmetric what is the equilibrium distribution of wife weights?
- Under what conditions on f does a stronger husband have a fatter wife?
- Derive the comparative statics with respect to K.
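Question 1 can be explored numerically under parametric assumptions of my own (the post does not pin down f): take speed f(s,w) = s − c·w plus Gaussian noise and iterate best responses on a grid. A side observation from this specification: the prize K·w multiplies the whole objective, so K drops out of the first-order condition and the equilibrium weight here does not move with K.

```python
# A numerical sketch of question 1 under parametric assumptions of my own:
# two symmetric husbands, speed f(s, w) = s - c*w plus Gaussian noise.
# Husband 1 wins if his speed draw beats husband 2's, and collects K*w1.
import math

c, sigma = 1.0, 1.0          # weight penalty and noise in race speed

def win_prob(w1, w2):
    """P(husband 1 wins) when speeds are s - c*w + N(0, sigma^2)."""
    # the difference of the two noise draws has std sigma*sqrt(2)
    z = c * (w2 - w1) / (sigma * math.sqrt(2))
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def best_response(w2):
    """Grid-search the weight maximizing K*w1*win_prob(w1, w2).

    The prize K multiplies the whole objective, so it cancels out of
    the argmax: the equilibrium weight is independent of K here."""
    grid = [i / 100 for i in range(1, 500)]
    return max(grid, key=lambda w1: w1 * win_prob(w1, w2))

# Damped best-response iteration to find the symmetric equilibrium weight.
w = 1.0
for _ in range(50):
    w = 0.5 * w + 0.5 * best_response(w)

# analytic benchmark: 0.5*sqrt(2)*sigma / (c * (1/sqrt(2*pi))) ≈ 1.77
print(round(w, 2))
```

This is only a sketch under one functional form; question 2 (stronger husbands and fatter wives) needs heterogeneity in s, which the same grid method handles.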
We have coordinated on April 1 as the date when everyone gets to indulge their latent desire to say something false and hope that it gets believed. The problem of course is that as a result nothing you say on April 1 is believed. Credibility has a public good aspect to it and the social optimum would conserve enough of it so that at least some of us could feed our hilarious public deception jones.
(You might argue otherwise. By reducing credibility across the board we set the stage for the truly exceptional liars to show their stuff fooling people even though everyone was expecting them to do exactly that.)
It soon becomes tempting to start making up stuff on the day before April 1, and then eventually sooner. Whether, how, and why this unfolds depends on just what the basic forces at work here are, a question about which we can only theorize. We know by revealed preference that people want to say made-up stuff. But it’s more than that: you want people to believe it, pass it on, and then get called out for believing a fake story. And the best kind of April Fools story is the kind that is ex post so obviously fake that the foolee looks especially gullible.
All of this suggests that April 1 is the least ideal day for April Fools. So then we can ask: why are April Fools pranks perpetrated on April 1 and not some other day? Is it because April 1 gives you reputational cover for reporting bogus stories? It sounds like a good theory, and if it were true we would not have to worry about March 31 Fools and the eventual Year-Round Fools. But it doesn’t seem to survive closer scrutiny. If my reputation for legitimate reporting is safe for that one day it must be because nobody expects me to do legitimate reporting that day. But then April 1 is the last day I would want to be dropping my April Fools.
Instead I think it’s something more subtle. The perfect April Fools prank works by first roping in the reader but then slowly revealing clues that remind her that it’s April Fools and she has been had. April 1 plays a crucial role in this development. The Reveal would not come with the same impact on any July 19. Indeed the best April Fools pranks tell you the date somewhere along the way. It’s how you say to the reader “look at you, you forgot about April Fools, and I got you” without actually saying that. And it provides ammunition when she blindly passes the story on and her friends can say “check the date.”
So April Foolers need the following epistemic infrastructure for their pranks: 1) the possibility of surprise, and 2) the expectation of surprise. And clearly it cannot be an equilibrium to have both of these if the source of #2 is to be contained in a single date, April 1. Once there is too much of #2, there is precious little of #1.
And that’s when the pre-emption begins. By publishing your April Fool on March 31, you get more of #1, and still pretty much the same amount of #2. Until of course everybody starts doing that and the unraveling continues. (We are already getting close.)
There is one bright side to all of this. When everybody has come to expect that whatever you say on April 1 is false, you may indeed no longer be able to fool people into believing something you made up. But the flipside is a welcome opportunity for many wishing to come clean at minimum cost. You now have a day when you can tell the truth and have nobody believe you.
Dear Northwestern Economics community. I was among the first to submit my bracket and I have already chosen all 16 teams seeded #1 through #4 to be eliminated in the first round of the NCAA tournament. In case you don’t believe me:
Now that I’ve got that out of the way, consider the following complete information strategic-form game. Someone will throw a biased coin which comes up heads with probability 5/8. Two people simultaneously make guesses. A pot of money will be divided equally among those who correctly guessed how the coin would land. (Somebody else gets the money if both guess incorrectly.)
In a symmetric equilibrium of this game the two players will randomize their guesses in such a way that each earns the same expected payoff. But now suppose that player 1 can publicly announce his guess before player 2 moves. Player 1 will choose heads and player 2’s best reply is to choose tails. By making this announcement, player 1 has increased his payoff to a 5/8 chance of winning the pot of money.
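The numbers are easy to verify directly (pot normalized to 1):

```python
# Direct check of both claims for the coin-guessing game: heads comes up
# with probability 5/8, correct guessers split the pot (normalized to 1).
from fractions import Fraction

P_H = Fraction(5, 8)
P_T = 1 - P_H

def u(my_guess, q):
    """My expected pot share when the opponent guesses heads with prob q."""
    if my_guess == "H":
        return P_H * (q * Fraction(1, 2) + (1 - q))   # split if both right
    return P_T * ((1 - q) * Fraction(1, 2) + q)

# Symmetric mixed equilibrium: heads with prob 7/8 makes the opponent
# indifferent, and each player earns 45/128 of the pot.
q_star = Fraction(7, 8)
assert u("H", q_star) == u("T", q_star) == Fraction(45, 128)

# Sequential version: player 1 announces heads; player 2's best reply is
# tails (3/8 of the pot beats splitting heads for 5/16).
assert u("T", 1) > u("H", 1)
# The announcement lifts player 1 from 45/128 to 5/8 of the pot.
assert P_H > Fraction(45, 128)
print("announcing first raises player 1's payoff from 45/128 to 5/8")
```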
This principle applies to just about any variety of bracket-picking game, hence my announcement. In fact in the psychotic version we play in our department, the twisted brainchild of Scott Ogawa, each matchup in the bracket is worth 1000 points to be divided among all who correctly guess the winner, and the overall winner is the one with the most points. Now that all of my colleagues know that the upsets enumerated above have already been taken by me, their best responses are to pick the favorites. Sure, they will be correct with high probability on each, but they will split the 1000 points with everyone else, and I will get the full 1000 on the inevitable one or two upsets that will come from that group.
Remember how Mr. Miyagi taught The Karate Kid how to fight? Wax on/Wax off. Paint the fence. Don’t forget to breathe. A coach is the coach because he knows what the student needs to do to advance. A big problem for coaches is that the most precocious students also (naturally) think they know what they need to learn.
If Mr. Miyagi told Daniel that he needed endless repetition of certain specific hand movements to learn karate, Daniel would have rebelled and demanded to learn more and advance more quickly. Mr. Miyagi used ambiguity to evade conflict.
An artist with natural gift for expression needs to learn convention. But she may disagree with the teacher about how much time should be spent learning convention. If the teacher simply gives her exercises to do without explanation her decision to comply will be on the basis of an overall judgment of whether this teacher, on average, knows best. To instead say “You must learn conventions, here are some exercises for that” runs the risk that the student moderates the exercises in line with her own judgment about the importance of convention.
This is a screenshot, from a few minutes ago (ed: last week), of bwin.com. The bets here are on goals scored in regular time of the Barcelona–Milan match to be played in a little while. Barcelona lost 2-0 in Milan, so Barcelona needs at least 2 goals to force extra time/penalty kicks. This is for the Champions League.
As you can see from the screenshot, Barcelona winning 1-0 pays 10, 2-0 pays 7.5, 3-0 pays 8.75, while 4-0 pays 12.
What can we learn from this non-monotonicity? Gamblers anticipate that Barcelona’s extra incentive to score the second goal makes 2-0 a more likely result than 1-0 (even though it requires scoring an extra goal!). Once they have scored the second goal, those extra incentives vanish, so we are back to the usual intuition that a result with more goals is less likely.
How could this effect play out in real time? Here’s a model. It takes effort to increase the probability of scoring a goal. An immediate implication is that if the score is 0-0 with little time left, Barcelona will stop spending effort and the game will end 0-0. Too late in the game and it becomes so unlikely they can score two goals that the effort cost isn’t worth it. But if the score is 1-0 they will continue to spend effort beyond that point. So there is some interval near the end of the game where the conditional probability of scoring a goal is positive if the score is 1-0 but close to zero if the score is 0-0.
I would be interested in seeing some numbers calibrated to generate the betting odds above. We need three parameters. The first two are the probability of scoring a goal in a given minute of game time when Barcelona spends effort, and when it does not. The third is Barcelona’s rate of substitution between effort and win-probability. This could be expressed as follows: over the course of a minute of play, what is the minimum increase in win probability that would give Barcelona sufficient incentive to spend effort? These three parameters will determine when Barcelona stops spending effort in the 1-0 versus 0-0 scenarios, and given this will then determine the probabilities of 1-0, 2-0, 3-0, etc. scores.
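Short of a real calibration, here is an illustrative computation in that spirit (all numbers invented, and Milan's goals ignored): an exact forward recursion over 90 minutes with effort-dependent hazards reproduces the non-monotone pattern.

```python
# An uncalibrated illustration of the effort model: per-minute scoring
# hazards (the numbers are mine), with Barcelona pressing at 0-0 only
# until minute 60 (after that two goals are too unlikely to be worth the
# cost), pressing at 1-0 until the end, and coasting once 2-0 is reached.

p_effort, p_coast = 0.04, 0.008   # per-minute goal probabilities
T0 = 60                           # minute effort stops in the 0-0 scenario

def hazard(goals, minute):
    if goals == 0:
        return p_effort if minute < T0 else p_coast
    if goals == 1:
        return p_effort                     # still chasing the second goal
    return p_coast                          # incentives vanish at 2-0

# Exact forward recursion; state = Barcelona goals, capped at 4.
prob = [1.0, 0.0, 0.0, 0.0, 0.0]
for minute in range(90):
    new = [0.0] * 5
    for g in range(5):
        h = hazard(g, minute) if g < 4 else 0.0
        new[g] += prob[g] * (1 - h)
        if g < 4:
            new[g + 1] += prob[g] * h
    prob = new

p10, p20, p30 = prob[1], prob[2], prob[3]
print(round(p10, 3), round(p20, 3), round(p30, 3))
assert p20 > p10 and p20 > p30    # the non-monotone pattern in the odds
```

The mechanism: conditional on getting the first goal, it usually arrives early while effort is still being spent, so the second goal mostly follows; the third has to come at the coasting hazard.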
Arthur Robson wrote this on Facebook:
If I am peacefully working out, and someone else arrives in the gym, they usually grab the TV remote to bathe in the inane chatter of preternaturally perky news shows. What if I were to arrive while they were watching TV and switched it off?
Which is a good point, but I still think a case can be made that it is morally permissible to turn the TV on but not to turn it off.
If you walk into the gym and the TV is on, that fact is a strong signal that somebody is watching it and would be harmed if you turned it off. On the other hand when the TV is off you have much less information about what people are paying attention to. You only know that nobody turned the TV on. This is consistent with everybody being indifferent to the TV being on and off.
The point being that a utilitarian calculation based only on the signal of whether the TV is on or off will always make it strictly more permissible to turn the TV on than to turn it off.
But note that the inference is a function of the moral code. And if people are following the turn-on-but-not-off code then the TV will be on even if nobody is watching it.
So what we need is an equilibrium code: a code that works even in equilibrium, when it is expected to be followed by others.
There are two actions, A and B, and there are two observable types of people L and R. Everybody is the same in the following sense: for any single individual either A or B is the optimal action but which one it is depends on an unknown state of the world.
But in another sense people are heterogeneous. It is common knowledge that in the state of the world where A is best for people of type L, B is best for people of type R. And in the other state it’s the other way around. Each person observes a private signal that contains information about the state of the world.
Acting in isolation everybody would do exactly the same thing: pick the action that is best according to their belief (based on the private signal) about the state of the world. But now embed this in a model of social learning. People make their choices in sequence and each observes the choices made by people who went before.
Standard herding logic tells us that L’s and R’s will polarize and choose opposite actions even if they get it completely wrong (with L’s choosing the action that is best for R’s and R’s choosing the action that is best for L’s).
(A reminder of how that works. Say that an L moves first. He chooses the action that looks best to him, say A. Now suppose the next guy is an R and by chance action B looks best to him. The third guy is going to look at the previous two and infer from their choices that there is strong information that the true state is such that A is good for L’s and B is good for R’s. This information can be so strong that it swamps his one little private signal and he follows the herd: choosing A if he is L or B if he is R. This perpetuates itself with all subsequent decision makers.)
In effect the L’s choose A just because the R’s are choosing B and vice versa.
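A bare-bones simulation of that logic, with signal precision 0.7 and a hand-picked signal sequence (the first two agents draw the wrong signal) to trigger the wrong cascade:

```python
# A bare-bones cascade with two observable types. The state is "alpha" or
# "beta"; in alpha, action A is best for L-types and B for R-types, and
# vice versa in beta. Signals match the true state with probability 0.7.
# The signal sequence is hand-picked so the first two agents are wrong.
from math import log

q = 0.7                      # signal precision
step = log(q / (1 - q))      # log-likelihood contribution of one signal

def choose(my_type, my_signal, public_llr):
    """Combine public log-likelihood (for alpha) with one's own signal."""
    llr = public_llr + (step if my_signal == "alpha" else -step)
    believes_alpha = llr > 0
    if my_type == "L":
        return "A" if believes_alpha else "B"
    return "B" if believes_alpha else "A"

# True state is beta, but agents 1 and 2 draw the wrong signal.
signals = ["alpha", "alpha"] + ["beta"] * 8
types = ["L", "R"] * 5
public_llr, actions = 0.0, []
for t, s in zip(types, signals):
    a = choose(t, s, public_llr)
    actions.append((t, a))
    # Onlookers invert each early choice back into the signal behind it
    # (an R choosing B reveals the same signal as an L choosing A). Once
    # the public evidence swamps one private signal, choices carry no
    # further news and the cascade is locked in.
    inferred_alpha = (t == "L" and a == "A") or (t == "R" and a == "B")
    if abs(public_llr) < 2 * step:               # not yet in a cascade
        public_llr += step if inferred_alpha else -step

print(actions)   # every L plays A and every R plays B: wrong for everyone
```

After the first two (wrong) revelations, all eight remaining agents ignore their correct private signals: the L's play A because the R's play B, and vice versa.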
In several computer science courses at Johns Hopkins University, the grading curve was set by giving the highest score on the final an A, and then adjusting all lower scores accordingly. The students determined that if they collectively boycotted, then the highest score would be a zero, and so everyone would get an A. Amazingly, the students pulled it off:
Her analysis of the problem would be the starting point for a nice introductory example in a game theory class (although it appears what she is saying is that taking the test is weakly dominant; I doubt that is true if there is a positive opportunity cost of time.)
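The claim is easy to formalize. A sketch with payoffs of my own choosing (grade 1 for an A, 0 otherwise; effort cost c for sitting the exam; the top score sets the curve, and a top score of zero means everyone ties it):

```python
# Formalizing the boycott game with invented payoffs: an A is worth 1,
# anything else 0, and sitting the exam costs c in effort. A player who
# sits scores 80; the highest score (even 0) defines an A.
from itertools import product

N = 3

def payoff(profile, i, c):
    """profile[j] is True if player j sits the exam; c is the effort cost."""
    scores = [80 if takes else 0 for takes in profile]
    got_a = scores[i] == max(scores)        # top score (even 0) earns an A
    return (1 if got_a else 0) - (c if profile[i] else 0)

def with_action(profile, i, action):
    p = list(profile)
    p[i] = action
    return tuple(p)

def is_nash(profile, c):
    return all(payoff(profile, i, c) >=
               payoff(with_action(profile, i, not profile[i]), i, c)
               for i in range(N))

# The collective boycott is a Nash equilibrium, costly effort or not.
assert is_nash((False,) * N, c=0) and is_nash((False,) * N, c=0.1)

# With zero effort cost, taking the test is weakly dominant for player 0...
assert all(payoff(with_action(pr, 0, True), 0, 0) >=
           payoff(with_action(pr, 0, False), 0, 0)
           for pr in product([False, True], repeat=N))
# ...but with a positive opportunity cost of time, sitting the exam
# strictly loses against the boycott, so weak dominance fails.
assert payoff((True, False, False), 0, 0.1) < payoff((False,) * N, 0, 0.1)
print("boycott survives as an equilibrium; weak dominance needs free time")
```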
Kava tembel tumble: Arthur Robson
Another good one from Scott Ogawa. It’s the Creampuff Dilemma. A college football coach has to set his pre-season non-conference schedule, thinking ahead to the end-of-season polling that decides Bowl bids. A schedule stocked with creampuffs means lots of easy wins. But holding fixed the number of wins, a tough schedule will bolster your ranking.
Here’s Scott’s model. Each coach picks a number p between 0 and 1. He is successful (s=1) with probability p and unsuccessful (s=0) with probability 1-p. These probabilities are independent across players. (Think of these as the top teams in separate conferences. They will not be playing against each other.)
Highest s-p wins.
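With two players (and my own tie-breaking assumption that ties split the win) the game can be explored on a grid. When both succeed, or both fail, the coach who picked the lower p wins, so there is a Bertrand-like incentive to undercut the rival ever so slightly, which rules out a pure-strategy equilibrium.

```python
# Scott's game with two players: each picks p, succeeds with prob p, and
# the higher value of s - p wins (my tie-breaking: ties split the win).
# When both succeed or both fail, the player who chose the LOWER p wins,
# which creates a Bertrand-style incentive to undercut the rival.

def win_prob(p1, p2):
    """P(player 1 wins); ties count as half a win."""
    total = 0.0
    for s1, q1 in ((1, p1), (0, 1 - p1)):
        for s2, q2 in ((1, p2), (0, 1 - p2)):
            v1, v2 = s1 - p1, s2 - p2
            total += q1 * q2 * (1.0 if v1 > v2 else 0.5 if v1 == v2 else 0.0)
    return total

grid = [i / 100 for i in range(101)]

def best_response(p2):
    return max(grid, key=lambda p1: win_prob(p1, p2))

# Against a rival at 0.5, the best reply is to undercut to 0.49:
assert best_response(0.5) == 0.49
# ...and a slight undercut beats a slight overcut against ANY p2 in the
# interior, so no pure symmetric equilibrium exists; it must be mixed.
assert all(win_prob(p2 - 0.01, p2) > win_prob(p2 + 0.01, p2)
           for p2 in grid[2:-1])
print("always undercut: the creampuff game has no pure equilibrium")
```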
But Grand Central, for years now, has relied on a system meant to mitigate, if not prevent, all the crazy. It is this: The times displayed on Grand Central’s departure boards are wrong — by a full minute. This is permanent. It is also purposeful.
The idea is that passengers rushing to catch trains they’re about to miss can actually be dangerous — to themselves, and to each other. So conductors will pull out of the station exactly one minute after their trains’ posted departure times. That minute of extra time won’t be enough to disconcert passengers too much when they compare it to their own watches or smartphones … but it is enough, the thinking goes, to buy late-running train-catchers just that liiiiiitle bit of extra time that will make them calm down a bit. Fast clocks make for slower passengers. “Instead of yelling for customers to hurry up,” the Epoch Times notes, “the conductors instead tell everyone to slow down.”
Not everyone is going to equilibrate, just the regulars. But that’s exactly what you want. If you set the clock right then everyone is rushing to the train just when it’s departing. If you set the clock 1 minute off and everyone equilibrates then still everyone rushes to the train when it’s departing.
The system works because some of the people adjust to the clock and others don’t. So the rush is spread over two minutes rather than one.
According to this video by Tim Harford, this includes: designing wargames for Kissinger et al., helping Kubrick with Dr. Strangelove, suggesting the red telephone be installed between the USA and the Soviet Union, and trying but failing to dissuade a bombing campaign in Vietnam. On the intellectual plane (as well as game theory), decades ahead of his time in thinking about behavioral economics (because he was trying to give up smoking), doing the first agent-based model (of discrimination) and thinking about climate change. Near-fatal flaw for the Nobel Committee: not enough math.
Great for teaching. Part on mutually assured destruction vs common interest games could easily be folded into discussion of Nash vs subgame perfect equilibrium and hence credibility.
[…] there’s an Achilles’ heel in creating phrase-based passwords. It’s the fact that most English speakers will craft phrases that make sense.
Ashwini Rao and Gananand Kini at Carnegie Mellon and Birenda Jha at MIT have developed proof-of-concept password-cracking software that takes advantage of that weakness. It cracks long passwords, and beats existing cracking software, simply by following rules of English grammar.
“Using an analytical model based on parts-of-speech tagging, we show that the decrease in search space due to the presence of grammatical structures can be as high as 50 percent,” the researchers write in their paper.
Bad grammar makes for good passwords:
Instead, get creative. Try poor grammar and spelling, as in “de whippoorsnapper sashay sideway,” or get completely silly, as in “flipper flopper fliddle fladdle.”
It doesn’t matter how correct it is, as long as you can easily remember it.
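The arithmetic behind the advice, with invented vocabulary sizes: constraining a four-word passphrase to grammatical templates collapses the search space by several bits.

```python
# Back-of-the-envelope version of the search-space point (the vocabulary
# sizes and templates are invented for illustration). A cracker that
# knows a passphrase is grammatical only has to search sentence
# templates, not all word sequences.
import math

nouns, verbs, adjectives, determiners = 20_000, 5_000, 10_000, 10
vocab = nouns + verbs + adjectives + determiners

# Any 4 words from the vocabulary:
unconstrained = vocab ** 4

# Grammatical 4-word sentences, e.g. det-adj-noun-verb
# ("the fluffy hamster sings"):
templates = [
    (determiners, adjectives, nouns, verbs),
    (determiners, nouns, verbs, adjectives),
    (adjectives, nouns, verbs, nouns),
]
grammatical = sum(math.prod(t) for t in templates)

bits = lambda n: math.log2(n)
print(f"{bits(unconstrained):.1f} bits vs {bits(grammatical):.1f} bits")
assert grammatical * 10 < unconstrained   # over an order of magnitude less
```

With these made-up numbers the grammatical space is several bits smaller; ungrammatical word salad claws that entropy back.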
The final seconds are ticking off the clock and the opposing team is lining up to kick a game winning field goal. There is no time for another play so the game is on the kicker’s foot. You have a timeout to use.
Calling the timeout causes the kicker to stand around for another minute pondering his fateful task. They call it “icing” the kicker because the common perception is that the extra time in the spotlight and the extra time to think about it will increase the chance that he chokes. On the other hand you might think that the extra time only works in the kicker’s favor. After all, up to this point he wasn’t sure if or when he was going to take the field and what distance he would be trying for. The timeout gives him a chance to line up the kick and mentally prepare.
What do the data say? According to this article in the Wall Street Journal, icing the kicker has almost no effect and if anything only backfires. Among all field goal attempts taken since the 2000 season with fewer than 2 minutes remaining, kickers made 77.3% of them when there was no timeout called and 79.7% when the kicker was “iced.”
So much for icing? No! Icing the kicker is a successful strategy because it keeps the kicker guessing as to when he will actually have to prepare himself to perform. The optimal use of the strategy is to randomize the decision whether to call a timeout in order to maximize uncertainty. We’ve all seen kickers, golfers, players of any type of finesse sport mentally and physically prepare themselves for a one-off performance. The mental focus required is a scarce resource. Randomizing the decision to ice the kicker forces the kicker to choose how to ration this resource between two potential moments when he will have to step up.
If you ice with probability zero he knows to focus all his attention when he first takes the field. If you ice with probability 1 he knows to save it all for the timeout. The optimal icing probability leaves him indifferent between allocating the marginal capacity of attention between the two moments and minimizes his overall probability of a successful field goal. (The overall probability is the probability of icing times the success probability conditional on icing, plus the probability of not icing times the success probability conditional on not icing.)
Indeed the simplest model would imply that the optimal icing strategy equalizes the kicker’s success probability conditional on icing and conditional on no icing. So the statistics quoted in the WSJ article are perfectly consistent with icing as part of an optimal strategy, properly understood.
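Here is the simplest such model, with a toy parameterization of my own: the kicker splits one unit of mental focus between the two possible kick moments, and the coach chooses the icing probability q to minimize the kicker's maximized success rate.

```python
# A toy version of the icing model (my parameterization): the kicker
# splits one unit of focus, a for the first lineup and 1-a for a
# post-icing redo; success probability is 0.6 + 0.3 * (focus spent at
# the moment that actually counts). The coach ices with probability q.

def success(q, a):
    """Kicker's overall success prob: not iced w.p. 1-q, iced w.p. q."""
    return (1 - q) * (0.6 + 0.3 * a) + q * (0.6 + 0.3 * (1 - a))

grid = [i / 100 for i in range(101)]

def kicker_value(q):
    return max(success(q, a) for a in grid)     # kicker best-responds

q_star = min(grid, key=kicker_value)            # coach minimaxes

assert q_star == 0.5                 # randomize 50/50 in this toy model
assert abs(kicker_value(0.5) - 0.75) < 1e-9
# Never icing, or icing for sure, lets the kicker focus fully:
assert abs(kicker_value(0.0) - 0.9) < 1e-9
assert abs(kicker_value(1.0) - 0.9) < 1e-9
print("optimal icing prob 0.5 cuts the kicker from 0.9 to 0.75")
```

At q = 1/2 the kicker is indifferent over allocations, and his conditional success rates with and without icing can be equal, exactly the pattern in the WSJ numbers.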
But whatever you do, call the timeout before he gets a freebie practice kick.
“JPMorgan Chase & Co. (JPM) asked more than 2,000 current and former employees to contribute to a settlement with the U.K.’s tax authority over their use of an offshore trust for bonus payments, according to a person briefed on the situation…..
People who used JPMorgan’s trust told the FT they were asked to participate in a so-called blind auction, in which they would volunteer to pay a tax rate of their choosing.
If the auction fails to generate enough money to fund the settlement, people who submitted less than the average bid would be excluded from the deal and face a 52 percent tax rate when the trust’s assets are liquidated, the newspaper said.
People who don’t wish to participate can try to fight the government’s demand, the person briefed on the situation said.”
The rules of the auction are not 100% clear from the article. Taken at face value, there is the possibility of multiple coordination equilibria. If I expect everyone else to contribute a lot but not enough to pay off the tax debt, then I will contribute a lot too to avoid the 52% tax. If I expect everyone to contribute a little, so will I, hoping that people who decided not to participate or contributed less than the average bid will bear the punishment. Finally, if I expect total contributions to exceed the tax debt, I will contribute zero. Uncertainty about everyone’s willingness to pay, deep pockets, etc. will generate randomness and perhaps refine equilibria, but leave open the possibility of multiplicity. Also, there will be positive probability that the auction does not fully recompense the tax authorities. This is also true in mixed strategy equilibria of the complete information game.
To increase contributions and guarantee success, the auction should specify that everyone who contributes more than the average bid will escape the 52% tax if total contributions are lacking. Then, people will submit more than the average just to be safe. Then, the average expected bid will go up. Then, they’ll submit even more etc.
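The escalation argument can be sketched as a naive best-response dynamic (all parameters invented):

```python
# A crude best-response dynamic for the bid-above-average incentive
# (parameters invented). Under the suggested rule, everyone wants to
# clear the average to stay safe, so each round every participant bids
# the last round's average plus a small safety margin, capped at the
# 52 percent punishment rate.

CAP = 0.52          # the tax rate you face if you fall below the average
MARGIN = 0.02       # cushion each person adds to be "safely" above average

bids = [0.10, 0.15, 0.20, 0.25, 0.30]   # initial voluntary tax rates
history = [sum(bids) / len(bids)]

for _ in range(40):
    avg = sum(bids) / len(bids)
    bids = [min(CAP, avg + MARGIN) for _ in bids]
    history.append(sum(bids) / len(bids))

# Average bids ratchet upward until everyone is pinned at the cap.
assert all(b2 >= b1 for b1, b2 in zip(history, history[1:]))
assert all(abs(b - CAP) < 1e-9 for b in bids)
print(f"average bid climbed from {history[0]:.2f} to {history[-1]:.2f}")
```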
Because talking takes time. And how much time it takes to talk depends in large part on how much time it takes to think of what you are going to say. The time spent reveals how much thinking you did. Here’s where truthtelling distinguishes itself. The time it takes to tell the truth is just the time it takes to remember what actually happened.
The time it takes to lie is the time it takes to invent a lie, check that it’s consistent with the facts, and invent all of the subsequent lies you are going to have to tell in order for your whole story to hang together.
Watching the Olympic Games this Summer I noticed that the volleyball competition has changed the scoring system from the old “sideout” system to what used to be called “quick score.” (This change may have happened a long time ago, I don’t watch much volleyball.) The traditional sideout scoring method increments the score only when the serving team wins a point. When the serving team loses the point the serve is awarded to the other team (a “sideout”) but the score is unchanged. This can lead to long drawn out games with repeated sideouts and little scoring. As a stopgap, in the old days, volleyball matches would switch to the quick score system after a certain amount of time has elapsed. In quick scoring a sideout earns a point for the team that gains the serve.
I always liked the sideout system, thinking of it as a characteristic volleyball rule that is compromised for expediency by the switch to quick score. Instinctively it seemed that the fact you could only score when you are serving played a big role in volleyball strategy. But when I was watching this summer it occurred to me that the two scoring systems are less different than it appeared at first.
The basic observation is that at any stage of the game sideout scores are just quick scores minus the number of sideouts. And sideouts necessarily alternate between teams so the number you are subtracting differs by at most one across the two teams. So I started to think if there was a way to characterize the mapping between scoring systems that would clarify precisely the strategic impact of the switch. And I think I figured it out.
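The basic observation is mechanical enough to verify in a few lines: simulate any sequence of rallies and track both scoreboards side by side.

```python
# A quick check of the bookkeeping claim: each team's quick score equals
# its sideout score plus the number of sideouts it has earned (points it
# won while receiving), and sideouts alternate between the teams.
import random

random.seed(7)
quick = [0, 0]        # quick scores
sideout = [0, 0]      # sideout scores
sideouts = [0, 0]     # sideouts earned (points won as the receiving team)
server = 0

for _ in range(200):
    winner = random.randrange(2)      # who wins the rally
    quick[winner] += 1                # quick scoring: every rally scores
    if winner == server:
        sideout[winner] += 1          # sideout scoring: only the server scores
    else:
        sideouts[winner] += 1         # receiver wins: sideout, serve changes
        server = winner
    for team in range(2):
        assert quick[team] == sideout[team] + sideouts[team]
    # sideouts alternate between teams, so the counts differ by at most 1
    assert abs(sideouts[0] - sideouts[1]) <= 1

print("quick score = sideout score + sideouts earned, at every point")
```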
Quick scoring is defined as follows. The team who wins a point has its score incremented by one, regardless of who was serving that point. (The serve switches when the receiving team wins a point just as in the sideout system.) The winner of the game is the first team to have a score of at least 15 (or 25 in other cases) and at least a 2 point lead. (I.e. the game continues past 15 if neither team has a two point lead.)
Quick scoring is equivalent to the following system: 28 points will be played. After 28 points (let’s call it regulation) if the score is tied (14-14) then they continue to play until some team has a 2 point advantage.
This is in turn equivalent to side-out scoring with the following amended rules. Lets refer to the team that receives serve in the first point of the game as the receiving team.
- A total of 28 points is played in regulation.
- At the end of play if either team is ahead by 2 points then that team wins except if
- the receiving team either scored the last point or earned a side-out on the last point, and the receiving team is ahead by 1 point. In this case the receiving team wins.
If none of these conditions are met then the game continues past regulation. We define the team that has the serve in the first point past regulation as team 1 and the other team as team 2. The score is reset to 0-0. Play continues (with side-out scoring) until the first moment at which one of the following occurs.
- Team 1 has a 2 point lead, in which case team 1 is the winner.
- Team 2 has a 1 point lead, in which case team 2 is the winner.
The proof of this equivalence is below the jump. Here’s what it means. Quick scoring is not an innocuous change in the rules to speed up play, but it’s pretty close. Because a near identical outcome would obtain if, instead of switching to quick score, we kept sideout scoring but capped the number of regulation points at 28. It’s nearly, but not exactly, identical because of the two scoring “epicycles” that have to be appended, namely #3 in regulation and #2 in overtime. Note that both of these wrinkles tend to benefit the receiving team. I don’t know the stats (anybody?) but it appears to me that the receiving team already has a large advantage in volleyball at the level of an individual point. You could say that an effect of sideout scoring is that it levels the playing field by giving a small overall advantage to the serving team. The switch to quick scoring eliminates that.
I wonder if there is a noticeable difference in the frequency with which the (initially) receiving team wins a volleyball game after the switch to quick scoring.
David McAdams sends this along:
I’ve created a fun and simple game-theory problem that I thought you might enjoy … This is the sort of problem you could give undergrads to find out who are the really bright ones. It might also be fun to mention (or play) in class.

Problem: Find the (unique) symmetric equilibrium of “The World’s Simplest Poker Game”, played as follows:

- **0** two players
- **1** each player pays an ante of $100
- **2** each player receives ONE card, which we can think of as independent random numbers on [0,1]
- **3** each player SIMULTANEOUSLY decides whether to “raise” $100 or “stay”
- **4A** if one player raises and the other stays, the raiser wins the pot, for a net gain of +$100
- **4B** if both raise, the players show their cards and whoever has the highest card wins, for a net gain of +$200
- **4C** if both stay, the players show their cards and whoever has the highest card wins, for a net gain of +$100

If you decide to solve this problem, please let me know how long it takes you … I’m curious how immediately obvious the answer is to you :) I have solved it myself and, I can tell you, the answer is simple and elegant.

Cheers,
David
N.B. My answer based on 5 minutes of thinking was wrong. I will post David’s solution over the weekend.
Update: As promised, here is David’s solution. Looks like Keith was the first to post the correct answer in the comments and thanks to Nicolas for pointing out that this example appeared in von Neumann and Morgenstern.
Here’s the explainer.
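If you want to experiment with candidate strategies before peeking at the solution, here is a small numerical payoff evaluator for the game. This is my own illustration, not David's solution; a strategy is any function mapping a card x in [0,1] to a probability of raising.

```python
def payoff(raise1, raise2, n=200):
    """Expected net gain of player 1, integrating over iid uniform cards on a grid."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n                  # player 1's card
        p = raise1(x)
        for j in range(n):
            y = (j + 0.5) / n              # player 2's card
            q = raise2(y)
            sign = (x > y) - (x < y)       # +1 if player 1 holds the higher card
            total += (p * q * 200 * sign              # both raise: showdown for +/-$200
                      + p * (1 - q) * 100             # only player 1 raises: wins the pot
                      - (1 - p) * q * 100             # only player 2 raises: player 1 loses the ante
                      + (1 - p) * (1 - q) * 100 * sign)  # both stay: showdown for +/-$100
    return total / n ** 2
```

For a sanity check: always raising against an opponent who always stays nets exactly +$100, and any strategy scores zero against an identical copy of itself, since the game is symmetric and zero-sum.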
As budget negotiations get underway with the threat of sequestration looming, it’s worth recalling a basic lesson from game theory.
Consider two parties in the same vehicle speeding towards a cliff. The one who concedes, i.e. chickens out and steers the car out of danger, is the loser. Winning is better than losing but either is better than driving off the cliff. Finally, time is valuable: if you are going to concede, you prefer to do it earlier rather than later. Still you are prepared to wait if you expect your rival will concede first.
In equilibrium of this game, unless someone concedes right away there is necessarily a positive probability that they will go over the cliff.
The proof is simple. Consider player 1 and suppose his strategy is not to concede immediately. Then we will show 1's strategy is such that if 2 never concedes there is a positive probability that 1 will also never concede and they will drive off the cliff together. To prove it, suppose the contrary: that 1's strategy will eventually concede with probability 1 (if 2 doesn't concede first). If that is 1's strategy then 2's best reply is to wait for 1 to concede. In equilibrium 2 will play such a strategy and the outcome will therefore be that 1 is the loser with probability 1. But if 1 is going to be the loser for sure anyway he should have conceded immediately. That's a contradiction. We have shown that if 1 does not concede immediately then his strategy will allow the car to drive off the cliff with positive probability. The exact same argument applies to 2. Thus in equilibrium, if the game begins without an immediate concession there is a positive probability they will plunge from the cliff.
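A stylized discrete-time version (with illustrative numbers that are entirely mine) makes the mixing concrete. Suppose winning is worth W, conceding is worth L < W, and each period of delay costs c < W − L. In a stationary mixed equilibrium, each player's per-period concession probability q must leave the rival indifferent between conceding now and waiting one more period:

```python
# Stylized discrete-time chicken (illustrative numbers, not from the post).
W, L, c = 10.0, 0.0, 1.0   # win value, concede value, per-period waiting cost

# Indifference between conceding now (worth L) and waiting one more period,
# which costs c and wins W with probability q (the rival's concession chance),
# otherwise continuing with value L again:
#   L = -c + q*W + (1 - q)*L   =>   q = c / (W - L)
# (assumes c < W - L so that q is a valid probability)
q = c / (W - L)

def p_no_concession(t):
    """Probability that NEITHER player has conceded after t periods --
    the 'driving off the cliff' event at any finite horizon is positive."""
    return ((1 - q) ** 2) ** t
```

With these numbers q = 0.1, so even after ten periods there is a nontrivial chance nobody has blinked, exactly the positive crash probability the proof delivers.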
If you are a parent you probably know of a few kids who have life-threatening allergies. And if you are forty-something like me you probably didn’t know anybody with life-threatening food allergies when you were a kid. It seems like the prevalence of food allergies has increased ten-fold in the last thirty years. Which seems impossible.
Here’s one potential explanation. Suppose that a small percentage of people have a life-threatening allergy to, say, peanuts. And suppose that doctors begin more carefully screening kids for potential food allergies. For example, a kid who gets a rash after eating something is given a skin test or blood test. A positive test correlates with food allergy but does not conclusively demonstrate it. In addition the test cannot distinguish a mild allergy from one that is life threatening.
But life-threatening food allergies are life threatening. The risk is so great that any child with a non-negligible probability of having it should be restricted from eating peanuts. Such a child will return to school with a note from the doctor that there should be no peanuts in class because of the risk of a life-threatening allergic reaction. This is what’s known as “being allergic to peanuts.”
This is all unassailable behavior on everybody’s part. And note that what it means is that while there continues to be just a small percentage of people who are deathly allergic to peanuts, there is a much larger percentage of people who, perfectly rightly, avoid peanuts because of the significant chance it could give them a life-threatening allergic reaction.
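A back-of-the-envelope calculation shows how big the wedge can get. All the numbers below are invented for illustration (the post gives none): a small true prevalence of severe allergy, a somewhat larger prevalence of mild allergy, and a screen that flags every true allergy plus a few false positives.

```python
p_severe = 0.005   # assumed share of kids with a life-threatening peanut allergy
p_mild   = 0.045   # assumed share with a mild peanut allergy
fpr      = 0.05    # assumed share of non-allergic kids the screen flags anyway
p_none   = 1 - p_severe - p_mild

# The screen flags every true allergy (it cannot tell mild from severe)
# plus some false positives; every flagged kid is prudently treated as
# "allergic to peanuts".
p_flagged = p_severe + p_mild + fpr * p_none

# Kids carrying the label per kid who is actually deathly allergic:
label_multiple = p_flagged / p_severe
```

With these invented numbers nearly twenty times as many kids end up labeled peanut-allergic as are actually deathly allergic, and the label is still the right call for every one of them.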
I’ve decided to lump speed together with all of these other (hypothesized) factors under the general heading of “Floor Stretch”. We’ll use it for an exercise in theoretical sports economics…Whatever it is that truly makes up “Floor Stretch”, it has to be sufficiently valuable that it offsets the lower raw productivity of the smaller players….
Floor Stretch, however, is really a relative function. Having 5 point guards on the floor only stretches the other team if they don’t also have 5 point guards playing. In this sense, what we really care about is the ratio of Floor Stretch between the two teams competing. Theoretically, the Floor Stretch ratio is what the raw productivity must be balanced against in order to determine the best mix of players. This, then, gets us into some classical Game Theory….
I’m too focused on the election to digest fully. But I got this from Goolsbee’s Twitter feed today – he must be confident?
The average voter’s prior belief is that the incumbent is better than the challenger. Because without knowing anything more about either candidate, you know that the incumbent defeated a previous opponent. To the extent that the previous electoral outcome was based on the voters’ information about the candidates this is good news about the current incumbent. No such inference can be made about the challenger.
Headline events that occurred during the current incumbent’s term were likely to generate additional information about the incumbent’s fitness for office. The bigger the headline the more correlated that information is going to be among the voters. For example, a significant natural disaster such as Hurricane Katrina or Hurricane Sandy is likely to have a large common effect on how voters evaluate the incumbent’s ability to manage a crisis.
For exactly this reason, an event like that is bad for the incumbent on average. Because the incumbent begins with the advantage of the prior. The upside benefit of a good signal is therefore much smaller than the downside risk of a bad signal.
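One way to see the asymmetry is with normal beliefs. The parametrization below is my own sketch, not the model in the paper: voters' prior on incumbent quality is N(mu, 1) with mu > 0 capturing the incumbency advantage, the incumbent wins iff the posterior mean beats the challenger's benchmark of 0, and a headline event sends every voter the same noisy signal of quality.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

mu = 0.5  # assumed prior advantage: voters' prior on quality is N(mu, 1)

def win_prob(noise_var):
    """Ex-ante win probability when voters observe the common signal
    s = quality + noise, with noise ~ N(0, noise_var), and update by Bayes."""
    # Posterior mean (noise_var*mu + s) / (1 + noise_var) is positive
    # iff s > -noise_var*mu; marginally s ~ N(mu, 1 + noise_var), giving:
    return norm_cdf(mu * sqrt(1.0 + noise_var))
```

A barely informative event (large noise_var) leaves the posterior near the favorable prior and the win probability near 1, while a huge, informative headline (noise_var near 0) drags it down toward norm_cdf(mu) < 1: starting ahead, the incumbent has less to gain from good news than to lose from bad.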
As I understand it, this is the theory developed in a paper by Ethan Bueno de Mesquita and Scott Ashworth, who use it to explain how events outside of the control of political leaders (like natural disasters) seem, empirically, to be blamed on incumbents. This pattern emerges in their model not because voters are confused about political accountability, but instead through the informational channel outlined above.
It occurs to me that such a model also explains the benefit of saturation advertising. The incumbent unleashes a barrage of ads to drive voters away from their televisions thus cutting them off from information and blunting the associated risks. Note that after the first Obama-Romney debate, Obama’s national poll numbers went south but they held steady in most of the battleground states where voters had already been subjected to weeks of wall-to-wall advertising.
In 1797 Johann Wolfgang von Goethe had completed a new poem Hermann and Dorothea, and he was interested in knowing and publicizing its “true worth.” So he concocted a scheme with his lawyer Mr. Böttiger and wrote this in a letter to his publisher:
I am inclined to offer Mr. Vieweg from Berlin an epic poem, Hermann and Dorothea, which will have approximately 2000 hexameters…. Concerning the royalty we will proceed as follows: I will hand over to Mr. Counsel Böttiger a sealed note which contains my demand, and I wait for what Mr. Vieweg will suggest to offer for my work. If his offer is lower than my demand, then I take my note back, unopened, and the negotiation is broken. If, however, his offer is higher, then I will not ask for more than what is written in the note to be opened by Mr. Böttiger.
To understand this scheme first consider the alternative scenario where the publisher is told the amount demanded. Then the publisher will say yes or no depending on whether his willingness to pay (the poem’s “true worth”) exceeds or falls short of the demand. But then Goethe would never know exactly the poem’s true worth, just an upper or lower bound for it.
With the demand kept secret, the publisher’s incentives remain the same: he wants to agree to a demand that is below his willingness to pay and refuse a demand that exceeds it. Without knowing what that demand is, there is one and only one way to ensure this. The publisher should offer exactly the poem’s true worth.
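The dominance can be checked mechanically. A sketch with invented numbers: Goethe's sealed demand d is fixed before the publisher moves, the publisher with true willingness to pay v offers b, the sale happens iff b ≥ d, and the key feature is that the price paid is d, not b.

```python
def publisher_payoff(b, v, d):
    """Publisher's net gain: he buys at Goethe's sealed demand d iff his
    offer b is at least d; the offer itself never sets the price."""
    return v - d if b >= d else 0
```

Raising b above v only risks buying when the sealed demand exceeds the poem's worth (a loss), and lowering b below v only risks losing a profitable purchase, so offering exactly v is weakly dominant – the same logic that makes truthful bidding dominant in the second-price (Vickrey) auction.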
Goethe had devised what is apparently the first dominant-strategy incentive compatible truthful revelation mechanism. The Vickrey auction is based on exactly this principle and so Goethe’s mechanism makes for a great starting point for teaching efficient auctions.
The Romney campaign is expanding ad buys beyond the battleground states. Is there a huge swell of enthusiasm so Romney is trying for a blowout or is it a bluff?
The traditional model of political advertising is the Blotto game. Each candidate divides a budget across n states. Each candidate’s probability of winning a state is increasing in his own expenditure there and decreasing in his rival’s. These models are hard to solve explicitly. What makes this election unusual is that the usual binding constraint – money – is slack in the battleground states. Instead, full employment of TV ad time and voter exhaustion with ads make further expenditure there unnecessary. But you can still spend the money on improving your get-out-the-vote operation or on expanding your ad buy to other states. Finally, you can send your candidate to a state. Your strategy varies as a function of how close the race is.
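To see why even the discrete version resists easy analysis, here is a minimal toy Blotto (my own illustration): each side splits an integer ad budget across states and carries a state by outspending the rival there.

```python
from itertools import product

def allocations(budget, states):
    """Every way to split an integer budget across the states."""
    return [a for a in product(range(budget + 1), repeat=states)
            if sum(a) == budget]

def payoff(a, b):
    """States won minus states lost (equal spending carries no state)."""
    return sum((x > y) - (x < y) for x, y in zip(a, b))
```

Even the 3-state, 3-unit toy game has 10 pure strategies, and simple intuitions fail: the even split (1,1,1) beats piling everything on one state, (3,0,0), by one state. With 50 states and continuous budgets, the strategy space explodes, which is why these games are hard to solve explicitly.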
If the battleground states are increasingly unlikely to be in your column, then a get-out-the-vote strategy will not be enough to tip them back in your favor. Better to try to make some other state close by advertising and mobilizing there. You must maintain your ad buy in the battleground states, though, to keep your competitor engaged so that they cannot divert resources themselves.
If the battleground states are close, then a get out the vote operation is quite useful even if ad spending is at its maximum. Better to do that than spend money in other locations where you are way behind.
If you are far ahead in the battleground states, you have to keep on spending there as your competitor is spending there either because he might win or to keep you spending there. But, cash you have sloshing around should be spent “expanding the map”. This gives you more paths to victory and also exerts a negative externality on your opponent, forcing him to divert resources including perhaps the most valuable resource of all, the candidate’s time.
So, you might spend heavily in a state even when you have little chance there. This always has the benefit of diverting your opponent’s attention. This means there is an incentive for a player to invest even if he is far ahead in the battleground states. But there is also an incentive to invest when you are behind, as you need more paths to victory and expenditure on getting out the vote is less useful. So, we can’t infer Romentum from the fact that Romney is advertising in MN and PA.
I think we can make stronger inferences by making a leap of faith and extrapolating this intuition to a state by state analysis. By comparing strategies with public polls, we can try to classify them into the three categories.
NC seems to fall into the first category for President Obama. Romney is ahead according to the polls but it gives the Obama campaign more ways to win and keeps the Romney resources stretched. Romney is roughly as far behind in MI, MN and PA as Obama is in NC. So, they play the same role for Romney as NC does for Obama. Bill Clinton and Joe Biden are campaigning in PA and MN so the Romney strategy has succeeded in diverting resources.
The scarcest resource is candidate time, so we can infer a lot from the candidates’ recent travel and their travel plans. If the race is close in any state it would be crazy to try a diversion strategy, as a candidate visit acts like a get-out-the-vote strategy and hence has great benefits when the race is close. The President is campaigning in WI, FL, NV, VA and CO. In fact, both candidates are frequently in FL and VA. NC is a strong state for Romney because, as far as I can tell, he has no plans to visit there and nor does the President. Similarly, I don’t see any Romney plans to visit MI, MN or PA. NV also seems to be out of Romney’s grasp as he has no plans to travel there. It is hard to make inferences about NH as Romney lives there, so it is easy for him to campaign. OH has so many electoral votes that neither candidate can afford not to campaign there – again no inferences can be made. Both candidates are in IA.
So, I think the state by state evidence is against Romentum. NC and NV do not seem to be in play. The rest of the battleground states are going to enjoy many candidate visits so they must be close. That’s about all I have!
- It’s socially valuable for the University of Michigan to measure consumer confidence and announce it, even if that is an irrelevant statistic. Because otherwise somebody with less neutral motives would invent it, manipulate it, and publicize it.
- Kids are not purely selfish. They like it when they get better stuff than their siblings – so much so that they often feel mistreated when they see a sibling get some goodies.
- Someone should develop a behavioral theory of how people play Rock, Scissors, Paper when it’s common knowledge that humans can’t generate random sequences.
- The shoulder is the kludgiest joint because there are infinitely many ways to do any one movement. Almost surely you have settled into a sub-optimal way.
- I go to a million different places for lunch but at each one I always order one dish.