How did British PM David Cameron steel himself for the historic all-nighter last week in Brussels?
Cameron, it is said, used his tried-and-tested “full-bladder technique” to achieve maximum focus and clarity of thought throughout the gruelling nine-hour session in Brussels. During the formal dinner and subsequent horse-trading into the early hours, the prime minister remained intentionally “desperate for a pee”.
Showing a healthy disdain for ivory-tower types who claim the opposite:
Australian and American researchers examined the “effect of acute increase in urge to void on cognitive function in healthy adults”. After making eight “healthy young adults” drink two litres of water over two hours, the researchers asked them to complete a series of tasks to test their cognitive performance. They concluded from the results that an “extreme urge to void [urinate] is associated with impaired cognition”.
(Regular readers of this blog will know that I consider that a good thing.)
John Lazarev at Stanford GSB has a nice little theory paper (not his job market paper, which is not little and not theory, but also nice). It’s a model of market competition which consists of two stages. In stage one the firms simultaneously and non-cooperatively choose subsets of prices. The interpretation is that each firm is restricting itself to later choose only prices from the restricted set. After seeing the restriction sets each firm has chosen, the firms then simultaneously choose prices from their respective sets.
This is a stylized model of the way “competition” works between airlines:
Almost every major US airline has independent pricing and yield (revenue) management departments. That operates as follows. The pricing department sets prices for each seating class (e.g. up to 6 non-refundable economy class fares) starting many days from the actual flight. These prices are subsequently updated very rarely. The revenue management department treats the prices as given but decides three times a day which of the fare classes to make available for purchase and which to keep closed. According to industry insiders, these departments do not actively interact with each other. Thus, there exist two stages of decision making. Effectively, the pricing department commits to a subset of prices, while the revenue management department chooses a price from this subset.
It’s also a great question for a future prelim. Construct an equilibrium (subgame-perfect please) in which the firms effectively collude and earn monopoly profits.
Simple. (I will assume symmetric linear cost homogeneous product price-competition because it makes the argument simple and also quite stark: standard Bertrand pricing leads to cutthroat competition and zero profits.) In the first stage each firm restricts itself to only two prices: the monopoly price and marginal cost. If nobody deviates from this then all firms set the monopoly price. If anybody deviates from this by either excluding the monopoly price or including an intermediate price then all firms set the lowest price in their chosen set. All other deviations are ignored.
It’s easy to check that this is a subgame perfect equilibrium and all firms earn monopoly profits. Lazarev does the same for a more general model of differentiated products price competition.
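The construction can be checked mechanically. Below is a minimal sketch (my own, not from the paper), assuming linear demand D(p) = 1 − p, zero marginal cost, and a finite grid of candidate price sets; it confirms that no stage-one deviation is profitable:

```python
from itertools import combinations

# Homogeneous-good Bertrand: the lower price takes the whole market, ties split.
def profit(p_own, p_other, c=0.0):
    if p_own > p_other:
        return 0.0
    share = 0.5 if p_own == p_other else 1.0
    return share * (p_own - c) * (1 - p_own)   # demand D(p) = 1 - p

C, PM = 0.0, 0.5                    # marginal cost and monopoly price
EQ_SET = (C, PM)                    # the prescribed stage-one restriction

def stage2_price(own_set, other_set):
    """Continuation play: monopoly price on path, lowest price after any deviation."""
    if own_set == EQ_SET and other_set == EQ_SET:
        return PM
    return min(own_set)

on_path = profit(PM, PM)            # each firm earns half the monopoly profit

# Enumerate stage-one deviations by firm 2 over a grid of candidate price sets.
grid = [round(0.05 * k, 2) for k in range(11)]
best_deviation = max(
    profit(stage2_price(dev, EQ_SET),      # deviator's continuation price
           stage2_price(EQ_SET, dev))      # rival drops to marginal cost
    for r in (1, 2, 3)
    for dev in combinations(grid, r)
    if dev != EQ_SET
)
print(on_path, best_deviation)      # 0.125 vs 0.0: deviating earns nothing
```

Any deviation triggers the rival to price at marginal cost, so the deviator either sells nothing at a positive price or earns zero at cost, exactly as in the argument above.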
In the past few weeks Romney has dropped from 70% to under 50% and Gingrich has rocketed to 40% on the prediction markets. And in this time Obama for President has barely budged from its 50% perch. As someone pointed out on Twitter (I forget who, sorry) this is hard to understand.
For example, if you think that in this time there has been no change in the conditional probabilities that either Gingrich or Romney beats Obama in the general election, then these numbers imply that the market thinks those conditional probabilities are equal. Conversely, if you think that Gingrich has risen because his perceived odds of beating Obama have risen over the same period, then it must be that Romney’s have dropped by precisely enough to keep the total probability of a GOP president constant.
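The accounting can be made concrete. A minimal sketch (the nomination shares echo the post; the common conditional win probability q = 0.5 is purely an illustrative assumption):

```python
# If every GOP hopeful beats Obama with the same conditional probability q,
# then any reshuffling of nomination odds leaves P(GOP president), and hence
# Obama's 50% contract price, unchanged.
q = 0.5   # assumed common P(beats Obama | nominated)

before = {"Romney": 0.7, "Gingrich": 0.3}                # nomination odds, weeks ago
after  = {"Romney": 0.5, "Gingrich": 0.4, "others": 0.1}

results = [sum(p * q for p in scenario.values()) for scenario in (before, after)]
print(results)   # 0.5 both times (up to rounding): Obama's price never moves
```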
It’s hard to think of any public information that could have these perfectly offsetting effects. Here’s the only theory I could come up with that is consistent with the data. No matter who the Republican candidate is, he has a 50% chance of beating Obama. This is just a Downsian prediction. The GOP machine will move whoever it is to a median point in the policy space. But, and here’s the model, this doesn’t imply that the GOP is indifferent between Gingrich and Romney.
While any candidate, no matter what his baggage, can be repositioned to the Downsian sweet spot, the cost of that repositioning depends on the candidate, the opposition, and the political climate. The swing from Romney to Gingrich reflects new information about these that alters the relative cost of marketing the two candidates. Gingrich has for some reason gotten relatively cheaper.
I didn’t say it was a good theory.
Update: Rajiv Sethi reminded me that the tweet was from Richard Thaler. (And see Rajiv’s comment below.)
The Palestinians cannot get membership in the UN because the United States would use its veto. But they have other options. A month ago they were voted in by a wide margin to the United Nations Educational, Scientific and Cultural Organization, UNESCO. This compelled the United States to cease funding to UNESCO because of an American law that prohibits contributing to any organization that recognizes the Palestine Liberation Organization, which the US considers to be a terrorist organization.
The US has no veto power to prevent entry to UNESCO or 14 other specialized UN agencies. Quoting Gwynne Dyer:
If the Palestinians apply for membership in each of these organisations over the next year or so, they will probably get the same 88 percent majority when it comes to a vote on membership. None of the countries that defied the United States and voted Palestine into UNESCO is going to humiliate itself by changing its vote at other UN agencies. And each time, Washington will be forced by law to cease its contributions to that agency.
The United States would not actually lose its membership by stopping its financial support – at least not for a good long while – but it would lose all practical influence on these agencies, which do a great deal of the work of running the world. It would be a diplomatic disaster for Washington…
I thank Sean Brockelbank for the pointer.
Forget about Twitter as a medium for organizing protests, it is surprisingly effective as an actual protest forum. Yesterday there was a story on NPR about a flurry of protest tweets that, within just hours, got JCPenney to remove from their racks a controversial t-shirt with the slogan “I’m too pretty to do homework so I have my brother do it for me.” (I am not 100% sure but I think the parents were worried about the message this was sending to boys. Boys can be pretty too and they shouldn’t always be doing so much homework.)
Electronic communications and social networking change the power structure of protests. One subtle reason is that the act of listening to the protest is no longer public and verifiable. The organization that you are protesting against would like to commit not to listen to your protest. If we believe that there is no way to get JCPenney’s ear then no matter how much we care about boys’ self esteem we waste our effort on a futile protest. In the old days, with the exception of protests so vocal that they make the evening news, this commitment was credible. Just don’t give out any public channel through which to express your protest.
Now the channel is already there. But more importantly, the act of listening to the protest is private and not verifiable. It is impossible to commit not to listen to protests on Twitter. JCPenney would like to announce to the world that no matter how much we protest on Twitter they are just not paying attention, so don’t bother. But even if they believed that announcement worked, they would still have an incentive to monitor #JCPenney hashtags on Twitter just in case some protest happened to break out. We all know that so we don’t believe them when they say they aren’t listening.
So social media change the balance of power of protests because they give the protesters the first-mover advantage.
Beginning in February of 2012 Stanford economist Matt Jackson and computer scientist Yoav Shoham will be offering an online course in game theory. Two hours of video lectures will be posted online each week and there will be a forum to ask questions of the instructors. Here is their introductory video.
The website where you can sign up for the course is here. Northwestern/Kellogg should do stuff like this.
Tyler Cowen passes along one:
A new technique of cybercrime is the taking hostage of data. “I think it’s going to become a more common tactic for attackers,” says Karen Schuler, Senior Managing Director of Kroll.
If the hostage taker has any credible threat then it remains credible whether or not I pay him because there is no way to prevent him from making arbitrary copies of the data. I can’t “buy them back” in any verifiable way.
The brochure (note that Kroll is a cybersecurity firm) talks about the threat of intellectual property data being stolen and the hostage taker threatening to sell it to my competitors. If you receive a call with such a threat the first thing you should do is sell your intellectual property to your competitors. There’s no way you are going to stop the thief from doing the same and you might as well get in on the profits.
On a similar note, these chartered jet passengers didn’t seem to understand the same point. (Ayam ack: Josh Gans)
Stefan Lauermann points me to a new paper, this is from the abstract:
Our analysis shows that both stake size and communication have a significant impact on the player’s likelihood to cooperate. In particular, we observe a negative correlation between stake size and cooperation. Also certain gestures, as handshakes, decrease the likelihood to cooperate. But, if players mutually promise each other to cooperate and in addition shake hands on it, the cooperation rate increases.
The standoff between Herman Cain and his accusers offers us some interesting strategy to contemplate. The accusers are muzzled by a non-disclosure agreement they signed as part of their settlement with the National Restaurant Association, where they worked alongside Mr. Cain. But let’s walk down the tree to the node where the NDA has to be enforced.
One of the accusers has gone public with the allegation. To enforce the NDA is to admit that the NDA exists, which in turn is an admission that there was a settlement, which for all practical purposes is an admission that the allegations are true. Now, back here at the beginning of the game tree, should the accusers consider this a credible threat?
Perhaps. Because the allegations will be devastating whether or not Mr. Cain confirms the settlement. By then he would have little to lose, and at that stage the presumed penalties mandated by the NDA plus plain old retribution would be motivation enough.
(Let’s admire but ultimately ignore as unrealistic the gambit of not enforcing the NDA as a way of “proving” that there is no NDA because there was never any settlement with these accusers.)
However, it appears that the settlement is actually a contract between the NRA and the accusers. If that is the case then the decision to enforce it may not be Mr. Cain’s. Does the NRA have any credible motivation to do so?
Maybe not, but in some ways this arrangement may strengthen Mr. Cain’s position. Imagine that the accuser’s lawyer holds a press conference and publicly asks “Mr. Cain, my client is subject to a non-disclosure agreement arising from a sexual harassment settlement in which you were the harasser. You deny this. Since the accusations are false you should be perfectly willing to release them from the NDA. Please prove to the American people that you are telling the truth by waiving it.”
Mr. Cain wiggles out of this one by publicly saying “Yes, I have nothing to hide. The NDA should be waived” and then by privately urging the NRA to do no such thing. The NRA can of course deny that there was any agreement, and indeed the agreement probably requires them to deny it, since it presumably prohibits all parties from talking about it.
I was working on a paper, writing the introduction to a new section that deals with an extension of the basic model. It’s a relevant extension because it fits many real-world applications. So naturally I started to list the many real-world applications.
“This applies to X, Y, and….” hmmm… what’s the Z? Nothing coming to mind.
But I can’t just stop with X and Y. Two examples are not enough. If I only list two examples then the reader will know that I could only think of two examples and my pretense that this extension applies to many real-world applications will be dead on arrival.
I really only need one more. Because if I write “This applies to X, Y, Z, etc.” then the Z plus the “etc.” proves that there is in fact a whole blimpload of examples that I could have listed and I just gave the first three that came to mind, then threw in the etc. to save space.
If you have ever written anything at all you know this feeling. Three equals infinity but two is just barely two.
This is largely an equilibrium phenomenon. A convention emerged according to which those who have an abundance of examples are required to prove it simply by listing three. Therefore those who have listed only two examples truly must have only two.
Three isn’t the only threshold that would work as an equilibrium. There are many possibilities such as two, four, five etc. (ha!) Whatever threshold N we settle on, authors will spend the effort to find N examples (if they can) and anything short of that will show that they cannot.
But despite the multiplicity I bet that the threshold of three did not emerge arbitrarily. Here is an experiment that illustrates what I am thinking.
Subjects are given a category and, say, 1 minute. You ask them to come up with as many examples from that category as they can think of in that minute. After the minute is up, you count how many examples they came up with and then give them another 15 minutes to come up with as many as they can.
With these data we would do the following. Plot on the horizontal axis the number x of items they listed in the first minute and on the vertical axis the number E(y|x) equal to the empirical average number y of items they came up with in total conditional on having come up with x items in the first minute.
I predict that you will see an anomalous jump upwards between E(y|2) and E(y|3).
This experiment does not take into account the incentive effects that come from the threshold. The incentives are simply to come up with as many examples as possible. That is intentional. The point is that this raw statistical relation (if it holds up) is the seed for the equilibrium selection. That is, when authors are not being strategic, then three-or-more equals many more than two. Given that, the strategic response is to shoot for exactly three. The equilibrium result is that three equals infinity.
My street is a Halloween Mecca. People flock from neighboring blocks to a section of my street and to the street just North of us. (Ours is an East-West street as are most of the residential streets in the area.) And I have noticed that in other neighborhoods in the area and in other places I have lived there is usually a local, focal Halloween hub where most of the action is.
And on those blocks where most of the action is the residents expect that they will get most of the action. They stock more candy, they lavishly decorate their yards, and they host haunted houses. They even serve beer. (To the parents)
I think I have figured out why we coordinated on my street.
In a perfectly symmetric neighborhood lattice, trick-or-treating is more or less a random walk. With a town full of randomly walking trick-or-treaters every location sees on average the same amount of traffic. Inevitably, one location will randomly receive an unusually large amount of traffic, those residents will come to expect it next year, decorate their street, and reinforce the trend. Then it becomes the focal point.
In this perfectly uniform grid, any location is equally likely to become that focal point. That is the benchmark model.
But neighborhoods aren’t symmetric. One particular asymmetry in my neighborhood explains why it was more likely that my street became the focal point. Two streets to the South is a major traffic lane that breaks up the residential lattice. In terms of our Halloween random walk, that street is a reflecting barrier. People on the street just to the South of us will all be reflected to our street. In addition we will receive the usual fraction of the traffic from streets to the North. So, even before any coordination takes hold our street will see more than the average density of trick-or-treaters. For that reason we have a greater chance of becoming the focal point. And we did.
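A toy simulation of this benchmark-plus-barrier story (my own sketch; the number of streets, walk length, and walker count are made-up parameters). Street 0 sits just north of the major road, so anyone there is pushed back north; street 1 plays the role of my street:

```python
import random

random.seed(1)
N_STREETS = 20      # street 0 abuts the major road (the reflecting barrier)
STEPS = 10          # blocks each trick-or-treater walks in an evening
WALKERS = 100_000

visits = [0] * N_STREETS
for _ in range(WALKERS):
    pos = random.randrange(N_STREETS)          # start on a uniformly random street
    for _ in range(STEPS):
        if pos == 0:
            pos = 1                            # reflected north by the barrier
        elif pos == N_STREETS - 1:
            pos = N_STREETS - 2                # far edge, just to keep the walk bounded
        else:
            pos += random.choice((-1, 1))
        visits[pos] += 1

print(visits[1], visits[2], visits[3])   # street 1 collects the most traffic
```

Streets in the interior see roughly equal traffic, but the street next to the reflecting barrier receives all of its southern neighbor’s walkers plus the usual share from the north, so it ends the evening with an above-average visit count.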
This one is just not fair:
[The original post embeds an image of the exam question here.]
But that’s a statistics question. Here’s the game theory question.
What percentage of students in the class will answer A) to this question?
A) Less than 50%
B) 50%
C) Greater than 50%
An auctioneer is never tempted to employ a shill bidder.
To be sure, he might want to make the winning bidder pay a higher price and using a shill bidder is one way to make that happen. For example, in an English auction the seller could shill bid until the price reaches a point where all but one bidders have dropped out. That price is the highest revenue he would have earned without shill bidding, and by shilling a little bit longer before finally dropping out, the seller could try to extract something more.
Of course, this comes at some risk for the seller because there is a chance that the high bidder will drop out before the shill bidder does, and then the seller misses out on a sale. Still, shill bidding pays off on average if the seller thinks that this small-probability loss is outweighed by the large-probability gain.
Nevertheless, the seller would never be tempted to do this.
The reason is that he could achieve exactly the same thing using a reserve price. Before the auction even begins he can ask himself what he would want to do if the price rose to that level. If he decided that he would want to use a shill bidder to raise the price even further, then he could bring about exactly the same effect by setting his reserve price at the desired level.
That is, a shill bidder is just a reserve price in disguise.
(ps, you don’t have to get very fancy to see why this is wrong.)
Via Vinnie Bergl, here is a post which examines pitch sequences in Major League Baseball, looking for serial correlation in pitch type, i.e. fastball, changeup, curve, etc. The motivating puzzle is the typical baseball lore that, e.g., the changeup “sets up” the fastball. If that were true then the batter would know he is going to face a fastball next and this would reduce the pitcher’s advantage. If the pitcher benefits from being unpredictable then there should be no serial correlation. The linked post gives a cursory look at the data which shows in fact the opposite of the conventional lore: changeups are followed by changeups.
There is a problem however with the simple analysis which groups together all pitch sequences from all pitchers. Not every pitcher throws a changeup. Conditional on the first pitch being a changeup, the probability increases that the next pitch will be a changeup simply because we learn from the first pitch that we are looking at a pitcher who has a changeup in his arsenal. To correct for this the analysis would have to be carried out at the individual level.
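This selection effect is easy to reproduce. A toy sketch (my own numbers, assuming half the pitchers never throw a changeup and the other half throw one 30% of the time, independently from pitch to pitch):

```python
import random

random.seed(0)
pairs = []
for _ in range(200_000):
    p_ch = random.choice([0.0, 0.3])     # draw a pitcher: no-changeup or 30%-changeup
    first = random.random() < p_ch       # pitch 1 is a changeup?
    second = random.random() < p_ch      # pitch 2 is a changeup?
    pairs.append((first, second))

p_overall = sum(second for _, second in pairs) / len(pairs)
after_ch = [second for first, second in pairs if first]
p_after_ch = sum(after_ch) / len(after_ch)
print(round(p_overall, 3), round(p_after_ch, 3))   # ≈ 0.15 vs ≈ 0.30
```

Each pitcher’s pitches are serially independent by construction, yet the pooled data show a changeup doubling the odds of another changeup, because the first changeup reveals the pitcher’s type.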
Should we expect serial independence? If the game was perfectly stationary, yes. But suppose that after throwing the first curveball the pitcher gets a better feel for the pitch and is temporarily better at throwing a curveball. If pitches were serially independent, then the batter would not update his beliefs about the next pitch, the curveball would have just as much surprise but now slightly more raw effectiveness. That would mean that the pitcher will certainly throw a curveball again.
That’s a contradiction, so there cannot be serial independence. To find the new equilibrium we need to remember that as long as the pitcher is randomizing his pitch sequence, he must be indifferent among all pitches he throws with positive probability. So the temporary advantage of the curveball must be offset, and this is achieved by the batter looking for a curveball. That can only happen in equilibrium if the pitcher is indeed more likely to throw a curveball.
Thus, positive serial correlation is to be expected. Now this ignores the batter’s temporary advantage in spotting the curveball. It may be that the surprise power of a breaking pitch is reduced when the batter gets an earlier read on the rotation. After seeing the first curveball he may know what to look for next and this may in fact make a subsequent curveball less effective, ceteris paribus. This model would then imply negative serial correlation: other pitches are temporarily more effective than the curveball so the batter should be expecting something else.
That would bring us back to the conventional account. But note that the route to “setting up the fastball” was not that it makes the fastball more effective in absolute terms, but that it makes it more effective in relative terms because the curveball has become temporarily less effective.
The latter hypothesis could be tested by the following comparison. Look at curveballs that end the at bat but not the inning. The next batter will not have had the advantage of seeing the curveball up close but the pitcher still has the advantage of having thrown one. We should see positive serial correlation here, that is the first pitch to the new batter should be more likely (than average) to be a curveball. If in the data we see negative correlation overall but positive correlation in this scenario then it is evidence of the batter-experience effect.
(Update: the Fangraphs blog has re-done the analysis at the individual level and it looks like the positive correlation survives. One might still worry about batter-specific fixed effects. Maybe certain batters are more vulnerable to the junk pitches and so the first junk pitch signals that we are looking at a confrontation with such a batter.)
The number of laws grows rapidly, yet the number of regulators grows relatively slowly. There are always more laws than there are regulators to enforce them, and thus the number of regulators is the binding constraint.
The regulators face pressure to enforce the most recently issued directives, if only to avoid being fired or to limit bad publicity. On any given day, it is what they are told to do. Issuing new regulations therefore displaces the enforcement of old ones.
One rejoinder would begin by observing that the origin of the problem is that future legislators are short-run players. Given that, it may even be normatively optimal for today’s short-run legislators to speed up the pace of their own regulations so that they are in effect as long as possible before their eventual displacement by the next generation. Of course this is conditional on today’s regulation being better than the marginal old one being displaced, which is presumably the case; otherwise it wouldn’t have been under consideration in the first place.
My sister-in-law asked me how many new PhDs in economics find jobs in academia (as opposed to taking private sector jobs.) I said “More than half.” Her reply surprised me, for a moment. She said “Really, that few?”
I was surprised because my answer gave her only a lower bound. “More than half” could easily mean “100%.” But after a moment I realized that my sister-in-law is very sophisticated and her response made perfect sense.
The NPR blog Planet Money is asking you to guess a number:
This is a guessing game. To play, pick a number between 0 and 100. The goal is to pick the number that’s closest to half the average of all guesses.
So, for example, if the average of all guesses were 80, the winning number would be 40.
The game will close at 11:59 p.m. Eastern time on Monday, October 10. We’ll announce the winner — and explain why we’re doing this — on Tuesday, October 11.
This is a famous game that has been used in numerous experiments investigating whether real people are as rational as game theory and economic theory assumes they are. Powerful logic suggests that you should guess the number zero:
- For sure the average will be no greater than 100 so half the average will be no greater than 50.
- Anybody who is smart enough to figure this out will guess something no greater than 50 so the average will be no greater than 50 and half the average will be no greater than 25.
- Anybody who is smart enough to figure this out will guess something no greater than 25, etc.
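The induction can be written out mechanically; each round of “and everybody knows that everybody knows this” halves the surviving upper bound:

```python
# Iterated elimination of dominated guesses: round k leaves only [0, 100 / 2**k].
bound = 100.0
for k in range(1, 11):
    bound /= 2
    print(f"round {k:2d}: only guesses up to {bound:.4f} survive")
# carried on forever, the bound shrinks to 0 -- the unique "fully rational" guess
```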
Of course time after time in experiments the actual guesses are very far from zero, demonstrating that people are in fact less rational than economic theory assumes.
Planet Money, however, is an intelligent blog and when they analyze the results of their experiment, they won’t jump to that conclusion. They will be insightful enough to see past the straw man.
It all starts at point 2. It is true that people who are smart enough to figure out point 1 will guess something no greater than 50, but almost all of those people are also smart enough to know that there is a sizeable proportion of people who are not that smart. And thus these smart people, if they are rational, will not deduce in point 2 that the average will be no greater than 50. The induction will not take them past point 2.
In fact, some of the smartest and most rational people in the world, professional chess players, guess numbers around 23 when they play these experiments. (To be precise, the chess players were playing a version of the Beauty Contest where you are supposed to guess 2/3 of the average. Their guesses would be somewhat lower in the Planet Money version, see below.) And that is because if someone is indeed as rational as game theory and economic theory assumes she is, and also she is smart enough to know that
- Not everybody is that rational,
- Most of the rational people know that not everybody is that rational,
- Most of the rational people know that most (but not all) of the rational people know that not everybody is that rational
etc., then she will never choose anything close to zero. Indeed, according to my calculations, the ultra-rational guess in the Planet Money Beauty Contest is about 16. Here is how I came up with that number.
I think that
- About 2/5 of the Planet Money readers will be confused by the rules of the game and guess 50.
- Another 3/10 will be smart enough to know that the rational thing to do is to guess something less than 50, and reasoning as in the straw-man argument they will guess 25.
- The remaining 3/10 of the population are the really smart ones.
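The post stops short of showing the arithmetic, but these shares pin it down: the smart 3/10 all make the same guess x, and x must equal half the resulting average. A quick sketch of that fixed point (the population split is the post’s assumption):

```python
w_confused, w_one_step, w_smart = 0.4, 0.3, 0.3   # 2/5 guess 50, 3/10 guess 25

x = 0.0                        # the smart players' common guess, to be solved for
for _ in range(100):           # fixed-point iteration; contracts by factor 0.15
    avg = w_confused * 50 + w_one_step * 25 + w_smart * x
    x = avg / 2

print(round(avg, 1), round(x, 1))   # average ≈ 32.4, ultra-rational guess ≈ 16.2
```

Solving x = (27.5 + 0.3x)/2 directly gives x = 13.75/0.85 ≈ 16.18, the “about 16” in the post.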
The roses in your garden are dead and your gardener tells you that there are bugs that have to be killed if you want the next generation of roses to survive. So you pay him to plant new roses and spray poison to keep the bugs away.
Each week he comes back and tells you that the bugs are still threatening to kill the roses and you will need to pay him again to spray the poison to keep them away. This goes on and on. At what point do you stop paying him to spray poison on your roses?
Keep in mind that if there really are bugs waiting to take over once the poison is gone, you are going to lose your roses if you stop spraying. So you are taking a big risk if you stop. On the other hand, only he really knows for sure if the bugs are threatening, you are just taking his word for it.
Now add to that the possibility that the poison is not guaranteed. You may have an infestation even in a week where he sprays. Of course this only happens if the bugs are a threat. If you spray for many weeks and you see no infestation this is a pretty good sign that the bugs are not a threat at all.
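How good a sign? A back-of-the-envelope Bayes update (all numbers are my assumptions, not from the post): start at even odds that the bugs are a real threat, and suppose that even when he sprays, an infestation breaks through 20% of the time in any week the bugs are present.

```python
posterior = 0.5            # prior probability that the bugs are a real threat
p_breakthrough = 0.2       # P(infestation in a week | bugs present, sprayed)

for week in range(10):     # ten quiet weeks in a row, assuming he really sprays
    num = posterior * (1 - p_breakthrough)       # bugs present, no infestation
    den = num + (1 - posterior) * 1.0            # no bugs: never an infestation
    posterior = num / den

print(round(posterior, 3))   # ten quiet weeks push P(bugs) down to about 0.097
```

Note the conditioning: this update is only valid if he is actually spraying, which is precisely what you cannot verify.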
If you do stop spraying at some point, on what basis do you make that decision? Assuming he is spraying vigilantly you would optimally stop after many weeks of no infestation. You would continue for sure if one week the bugs return even though he was spraying.
But you don’t know for sure that he is actually spraying. You are paying him to do it, but you are taking his word for it. If you assume that he is doing his job and spraying vigilantly, and you therefore follow the decision rule above, then if he wants to keep his job he won’t be spraying vigilantly after all.
So what do you do?
Close your eyes. Apparently your opponent will have an increased tendency to imitate your move, increasing the chance of a draw. At least that is what is reported in this study. A blindfolded player played RSP against a sighted player and their outcomes were compared to a control treatment in which two blindfolded players played.
A draw was achieved almost exactly 1/3 of the time when the two blindfolded players met, but that rate increased to 36.3% in the blind-sighted treatment, a statistically significant difference. The authors attribute this to a sub-conscious tendency to imitate the actions of others. In particular, when the blind player completed his move more than 200 milliseconds prior to the sighted player, the sighted player had an increased tendency to play the same move.
200 milliseconds is too fast for a conscious reaction but still within the time necessary for the visual signal to be sent to the brain and an impulsive response signal to be sent to the hand.
If this is true then you should be able to increase your chance of winning in RSP by holding rock until the very last opportunity and then throwing paper. You will sometimes trigger an automatic imitation of your rock and win with your paper.
Are there even more draws when both players have their eyes open?
(Fez float: Not Exactly Rocket Science.)
This joke has been internetting for the past week. (Karakul kick: Noam Nissan)
Here’s the game theorists’ version: Three game theorists with identical preferences but asymmetric information walk into a bar. The server asks “Does everyone want a beer?” They respond in sequence:
- Game Theorist #1: “Yes!”
- Game Theorist #2: “Yes!”
- Game Theorist #3 “I don’t know.”
Two radio stations compete for advertisers. They run ads during 10 minute slots that they can locate anywhere within a given hour of air time. They know that listeners don’t like ads and will switch to another station to avoid them. Will their commercial times be disjoint, overlapping or will they exactly coincide?
Whatever they do, the listeners will adjust their behavior. Disjoint advertising intervals would mean that listeners, regardless of which station they are currently tuned to, will switch as soon as the ads start and always be listening to music. So that’s not an equilibrium.
Suppose they overlap. Radio station B is trying to be clever by starting its ads just a minute later than A. Those listening to radio station A switch to B when the ads start to get an extra minute of music. But when the ads start on B, the listeners know that the music will begin sooner on radio station A. But since you don’t know exactly when the ads will end, and in the meantime you have ads on either station, the time to switch to A is now. That’s not an equilibrium either.
If the ad intervals exactly coincide then listeners learn there is no point in switching. And if listeners aren’t switching then the stations can do no better than to have their ad intervals coincide. So that’s the equilibrium.
This paper by Andrew Sweeting shows empirically that stations coordinate their advertising intervals and explores the motives.
My simple model omits NPR. What programming runs on public radio during the ad intervals on commercial radio? Do commercial radio stations change their behavior during NPR pledge drives?
Pennsylvania is considering a change in how it allocates electoral votes in Presidential elections. Currently, like nearly all other states, Pennsylvania’s electoral votes are up for grabs in a winner-take-all contest. All 20 of its votes go to the candidate who receives the largest share of the popular vote in the state. The state’s Republican party, currently in control of the legislature and the governor’s office, is considering a switch to a system in which each of the 18 congressional districts in the state would award a vote to the winner in that district. (I believe the remaining two would be decided by state-wide popular vote.)
There are a number of ways to think about the incentives to switch between these two systems. One way is to ask how the change would affect the overall flow of campaign dollars/favors to the state. On this score, in a state like Pennsylvania, the proportional-vote system is clearly better.
Only one Republican Presidential candidate has carried the state in the last 25 years. The cost of increasing Republican vote share by a few percentage points would be wasted in a state where Democrats begin with such a large advantage. But in a proportional system such an investment can pay off. The Republican party will now spend to compete for the marginal vote and Democrats will likely spend to defend it.
Of course the real question is how a state with a strong Democratic leaning could be expected to vote to switch to a system that will not only channel money to Republican districts but also help the Republican Presidential candidates.
Note that the opposite ranking holds in a more competitive state. If the two parties are on equal terms in a state, then a winner-take-all system gives a huge reward to a party who invests enough to gain a 1% advantage in vote-share. By contrast a proportional system offers at most a single vote in return for that same investment. Such a state maximizes its electoral spoils by sticking with winner-take-all. And with no majority party these economic incentives should dominate.
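The two cases can be put into numbers with a back-of-the-envelope sketch; the vote shares and district counts below are invented for illustration:

```python
# How many extra electoral votes does a 1-point swing buy?
# (All shares are made-up numbers, purely illustrative.)

def winner_take_all_gain(base_share, swing, total_votes=20):
    """Votes gained by moving from base_share to base_share + swing."""
    before = total_votes if base_share > 0.5 else 0
    after = total_votes if base_share + swing > 0.5 else 0
    return after - before

def district_gain(district_shares, swing):
    """Districts (one vote each) flipped by a uniform swing."""
    before = sum(1 for s in district_shares if s > 0.5)
    after = sum(1 for s in district_shares if s + swing > 0.5)
    return after - before

# Lopsided state (45% base share): a 1-point swing is wasted under
# winner-take-all but flips two knife-edge districts.
print(winner_take_all_gain(0.45, 0.01))                               # 0
print(district_gain([0.55, 0.55, 0.498, 0.495] + [0.40] * 14, 0.01))  # 2

# Competitive state (49.5% statewide): the same swing is worth all 20
# votes under winner-take-all.
print(winner_take_all_gain(0.495, 0.01))                              # 20
```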
Taking stock of these two cases, it is not surprising that almost all states use a winner-take-all system. Indeed, Nebraska, one of the few states with a proportional system, may soon switch to winner-take-all.

Usain Bolt was disqualified in the final of the 100 meters at the World Championships due to a false start. Under current rules, in place since January 2010, a single false start results in disqualification. By contrast, prior to 2003 each racer who jumped the gun would be given a warning and then disqualified after a second false start. In 2003 the rules were changed so that the entire field would receive a warning after a false start by any racer and all subsequent false starts would lead to disqualification.
Let’s start with the premise that an indispensable requirement of sprint competition is that all racers must start simultaneously. That is, a sprint is not a time trial but a head-to-head competition in which each competitor can assess his standing at any instant by comparing his and his competitors’ distance to a fixed finish line.
Then there must be a penalty for a false start. The question is how to design that penalty. Our presumed edict rules out marginally penalizing the pre-empter by adding to his time, so there’s not much else to consider other than disqualification. An implicit presumption in the pre-2010 rules was that accidental false starts are inevitable and that there is a trade-off between the incentive effects of disqualification and the social loss of disqualifying a racer who erred despite competing in good faith.
(Indeed this trade-off is especially acute in high-level competitions, where a false start is defined as any start less than 0.10 seconds after the report of the gun. It is assumed to be impossible to react that fast. But now we have a continuous variable to play with. How much more impossible is it to react within .10 seconds than within .11 seconds? Once you admit that there is a probability p>0, increasing in the threshold, that a racer is gifted enough to react within that threshold, the optimal incentive mechanism picks the threshold that balances type I and type II errors. The maximum penalty is exacted when the threshold is violated.)
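To make the type I/type II balancing concrete, here is a toy calculation; the reaction-time distributions and costs are entirely invented:

```python
import math

# Let type_I(t) be the chance a genuinely gifted racer reacts faster
# than cutoff t and is wrongly disqualified, and type_II(t) the chance
# a gun-timer's recorded time lands above t and escapes. Raising t
# trades more type I error for less type II error; the optimal cutoff
# minimizes total expected cost.
def logistic(x):
    return 1 / (1 + math.exp(-x))

def type_I(t):   # genuine reactions assumed centered near 0.12s
    return logistic((t - 0.12) / 0.005)

def type_II(t):  # gun-timers' starts assumed centered near 0.08s
    return logistic((0.08 - t) / 0.005)

def expected_loss(t, c1=1.0, c2=1.0):
    # c1: cost of a wrongful DQ; c2: cost of a gun-timer slipping through
    return c1 * type_I(t) + c2 * type_II(t)

# Grid search over cutoffs from 0.06s to 0.15s. With symmetric costs
# the optimum lands halfway between the two populations, at 0.10s.
best_t = min((t / 1000 for t in range(60, 151)), key=expected_loss)
```

Tilting the costs c1 and c2 shifts the optimal threshold toward whichever error is cheaper to tolerate.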
Any system involving warnings invites racers to try and anticipate the gun, increasing the number of false starts. But the pre- and post-2003 rules play out differently when you think strategically. Think of the costs and benefits of trying to get a slightly faster start. The warning means that the costs of a potential false start are reduced. Instead of being disqualified you are given a second chance but are placed in the dangerous position of being disqualified if you false start again. In that sense, your private incentives to time the gun are identical whether the warning applies only to you or to the entire field. But the difference lies in your treatment relative to the rest of the field. In the post-2003 system that penalty will be applied to all racers so your false start does not place you at a disadvantage.
Thus, both systems encourage quick starts but the post-2003 system encouraged them even more. Indeed there is an equilibrium in which false starts occur with probability close to 1, and after that all racers are warned. (Everyone expects everyone else to be going early, so there’s little loss from going early yourself. You’ll be subject to the warning either way.) After that ceremonial false start the race becomes identical to the current, post-2010 rule in which a single false start leads to disqualification. My reading is that this equilibrium did indeed obtain and that this was the reason for the rule change. You could argue that the pre-2003 system was even worse because it led to a random number of false starts, so racers had to train for two types of competition: one in which quick starts were a relevant strategy and one in which they were not.
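The claim that mass early starts are self-enforcing under the 2003-2009 rule can be illustrated with a toy two-racer payoff calculation; the numbers `EDGE` and `q` are invented:

```python
# Stylized two-racer model of the field-wide-warning rule. Going EARLY
# gives a head-start edge with probability q of escaping detection; a
# flagged start restarts the race with a warning for EVERYONE, so a
# lone false start imposes no *relative* cost on the offender.
EDGE = 1.0   # value of a successful quick start
q = 0.2      # chance a quick start beats the gun undetected

def payoff(me_early, rival_early):
    """Expected relative payoff to 'me' before any restart."""
    gain = 0.0
    if me_early:
        gain += q * EDGE      # my edge if I get away with it
    if rival_early:
        gain -= q * EDGE      # rival's edge if he gets away with it
    # A flagged start restarts the race with a field-wide warning:
    # everyone is warned alike, so the restart contributes nothing.
    return gain

# EARLY is a best response to EARLY: both racers timing the gun is an
# equilibrium, matching the "ceremonial false start" described above.
assert payoff(True, True) >= payoff(False, True)
# Going early is strictly better against a waiting rival too, so
# everyone waiting is not an equilibrium.
assert payoff(True, False) > payoff(False, False)
```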
Is there any better system? Here’s a suggestion. Go back to the 2003-2009 system with a single warning for the entire field. The problem with that system was that the penalty for being the first to false start was so low that, when you expected everyone else to be timing the gun, your best response was to time the gun as well. So my proposal is to modify that system slightly to mitigate this problem. Now, if racer B is the first to false start, then in the restart, if there is a second false start by, say, racer C, both racer B and racer C are disqualified. (In subsequent restarts you can either clear the warning and start from scratch or keep the warning in place for all racers.)
Here’s a second suggestion. The racers start by pushing off the blocks. Engineer the blocks so that they slide freely along their tracks and only become fixed in place at the precise moment that the gun is fired.
(For the vapor mill, here are empirical predictions about the effect of previous rule-regimes on race outcomes:
- Compared with pre-2003, under the 2003-2009 rules you should see more races with at least one false start but far fewer total false starts per race. The current rules should produce the fewest false starts.
- Controlling for trend (people get faster over time), if you consider races with no false start, race times should be faster under the 2003-2009 rules than pre-2003. That ranking reverses when you consider races with at least one false start. Controlling for Usain Bolt, times should be unambiguously slower under the current rules.)
From Paul Kedrosky, via Mallesh Pai:
In the game of Scrabble, letter tiles are drawn uniformly at random from a bag. The variability of possible draws as the game progresses is a source of variation that makes it more likely for an inferior player to win a head-to-head match against a superior player, and more difficult to determine the true ability of a player in a tournament or contest. I propose a new format for drawing tiles in a two-player game that allows for the same tile pattern though not the same board to be replicated over multiple matches, so that a player’s results can be better compared against others’, yet is indistinguishable from the bag-based draw within a game. A large number of simulations conducted with Scrabble software shows that the variance from the tile order in this scheme accounts for as much variance as the different patterns of letters on the board as the game progresses. I use these simulations as well as the experimental design to show how much various tiles are able to affect player scores depending on their placement in the tile seeding.
Alternatively you could just let the market prices tell you.
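My reading of the proposed scheme is essentially duplicate-style seeding: fix the tile order with a shared seed so the same draw sequence can be replicated across matches, while within any one game the draws still look like uniform draws from a bag. A minimal sketch (the truncated tile set covers only the five most common letters, for illustration):

```python
import random

# Subset of the real Scrabble distribution: E x12, A x9, I x9, O x8, N x6.
TILES = list("E" * 12 + "A" * 9 + "I" * 9 + "O" * 8 + "N" * 6)

def seeded_tile_order(seed):
    """A full draw order for the bag, reproducible from the seed."""
    rng = random.Random(seed)
    bag = TILES[:]
    rng.shuffle(bag)
    return bag

# Two matches run with the same seed face the identical tile sequence,
# so differences in outcomes reflect play rather than luck of the draw.
match_1 = seeded_tile_order(seed=2011)
match_2 = seeded_tile_order(seed=2011)
assert match_1 == match_2
```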
I read the transcript and it is a very eloquent clarification of his views on game theory’s role and even the game theorist’s role. Worth checking out.
The weather in Chicago sucks but at least there are real seasons (there’s only one in SoCal where I am from.) Here’s a thought about seasons.
Everything gets old after a while. No matter how much you love it at first, after a while you are bored. So you stop doing it. But then after time passes and you haven’t done it for a while it gets some novelty back and you are willing to do it again. So you tend to go through on-off phases with your hobbies and activities.
But some activities can only be fun if enough other people are doing it too. Say going to the park for a pickup soccer game. There’s not going to be a game if nobody is there.
We could start with everyone doing it and that’s fun, but like everything else it starts to get old for some people. They cut back, and before long it’s not much of a pickup game.
Now, unlike your solo hobbies, when the novelty comes back you go out to the field but nobody is there. This happens at random times for each person until we reach a state where everybody is keen for a regular pickup game again but there’s no game. What’s needed is a coordination device to get everyone out on the field again.
Seasons are a coordination device. At the beginning of summer everyone gets out and does that thing they have been waiting since last year to do. Sure, by the end of the season it gets old, but that’s OK: summer is over. The beginning of next summer is the coordination device that gets us all out doing it again.
Hume has been locked out of the room and he is not allowed to re-enter in the form of Parfit having a dialogue with Cho and Kreps.
That’s from Tyler’s review of a book called On What Matters Vol. I (a title, which in my opinion can be gainfully edited down to “SW Swell.”)
Here is the report from Eran Shmaya, with a digression that begins with hummus:
And speaking of food, the hummus you get in the cafeteria near the law school is an offense to all taste and decency, though non-Israelis still enjoyed it (no surprise: it’s still better than what you get in the States under `hummus’). If you go to Jerusalem, Lina (just near the Via Dolorosa, where Jesus of Nazareth walked twenty centuries ago) was pretty good. The best of the best is Ali Karawan in Jaffa, but I didn’t get to go there this time. And speaking of Jaffa, Rann Smorodinsky earned our everlasting admiration for suggesting Haj Kahil for dinner. And btw, Rann didn’t invent the Kalai-Smorodinsky bargaining solution when he was six, as somebody suggested to me. That Smorodinsky is his father.

A reader, Kanishka Kacker, writes to me about Cricket:
Now, very often, there are certain decisions to be made regarding whether a given batter was out or not, where it is very hard for the umpire to decide. In situations like this, some players are known to walk off the field if they know they are “out” without waiting for the umpire’s decision. Other players don’t, waiting to see the umpire’s decision.
Here is a reason given by one former Australian batsman, Michael Slater, as to why “walking” is irrational:
(this is from Mukul Kesavan’s excellent book “Men in White”)
“The pragmatic argument against walking was concisely stated by former Australian batsman Michael Slater. If you walk every time you’re out and are also given out a few times when you’re not (as is likely to happen for any career of a respectable length), things don’t even out. So, in a competitive team game, walking is, at the very least, irrational behavior. Secondarily, there is a strong likelihood that your opponents don’t walk, so every time you do, you put yourself or your team at risk.”
What do you think?
Let me begin by saying that the only thing I know about Cricket is that “Ricky Ponting” was either the right or the wrong answer to the final question in Slumdog Millionaire. Nevertheless, I will venture some answers because there are general principles at work here.
- First of all, it would be wrong to completely discount plain old honor. Kids have sportsmanship drilled into their heads from the first time they start playing, and anyone good enough to play professionally started at a time when he or she was young enough to believe that honor means something. That can be a hard doctrine to shake. Plus, as players get older and compete at more selective levels, some of that selection is on the basis of sportsmanship. So there is some marginal selection for honorable players to make it to the highest levels.
- There is a strategic aspect to honor. It induces reciprocity in your opponent through the threat of shame. If you are honorable and walk, then when it comes time for your opponent to do the same, he has added pressure to follow suit or else appear less honorable than you. Even if he has no intrinsic honor, he may want to avoid that shame in the eyes of his fans.
- But to get to the raw strategic aspects, reputation can play a role. If a player is known to walk whenever he is out then by not walking he signals that he is not out. In those moments of indecision by the umpire, this can tip the balance and get him to make a favorable call. You might think that umpires would not be swayed by such a tactic but note that if the player has a solid reputation for walking then it is in the umpire’s interest to use this information.
- And anyway remember that the umpire doesn’t have the luxury to deliberate. When he’s on the fence, any little nudge can tilt him to a decision.
- Most importantly, a player’s reputation will have an effect on the crowd and their reactions influence umpires. If the fans know that he walks when he’s out and this time he didn’t walk they will let the umpire have it if he calls him out.
- There is a related tactic in baseball, where the manager kicks dirt onto the umpire’s shoes to show his displeasure with a call. It is known that this will never influence the current decision, but it is believed to have the effect of “getting into the umpire’s head,” potentially influencing later decisions.
- Finally, it is important to keep in mind that a player walks not because he knows he is out but because he is reasonably certain that the umpire is going to decide that he is out whether or not he walks. The player may be certain that he is not out, but only because he is in a privileged position on the field where he can determine that. If the umpire didn’t have the same view, it would be pointless to try to persuade him. Instead he should walk and invest in his reputation for the next time the umpire is truly on the fence.
Emperor penguins form a group huddle to share warmth as they wait for eggs to hatch. How do they coordinate?
Emperor penguins are the only vertebrates that breed during the austral winter where they have to endure temperatures below −45°C and winds of up to 50 m/s while fasting. From their arrival at the colony until the eggs hatch and the return of their mates, the males, who solely incubate the eggs, fast for about 110–120 days [1]–[3]. To conserve energy and to maintain their body temperature[4], the penguins aggregate in huddles where ambient temperatures are above 0°C and can reach up to 37°C [1]–[3].
Huddling poses an interesting physical problem. If the huddle density is too low, the penguins lose too much energy. If the huddle density is too high, internal rearrangement becomes impossible, and peripheral penguins are prevented to reach the warmer huddle center. This problem is reminiscent of colloidal jamming during a fluid-to-solid transition [5]. In this paper we show that Emperor penguins prevent jamming by a recurring short-term coordination of their movements.
What are the individual incentives in the huddle? It would seem that the dynamics would be governed by the need to prevent manipulation by a self-interested penguin.
Check out this video (unfortunately you have to click to download it; it’s about 30MB and there is no streaming version.)
