You are currently browsing the tag archive for the ‘game theory’ tag.
There is a story in the Wall Street Journal about user ratings on web sites such as Amazon or eBay. It seems that raters are unduly generous with their stars.
One of the Web’s little secrets is that when consumers write online reviews, they tend to leave positive ratings: The average grade for things online is about 4.3 stars out of five.
And some users are fighting back:
That’s why Amazon reviewer Marc Schenker in Vancouver has become a Web-ratings vigilante. For the past several years, he has left nothing but one-star reviews for products. He has called men’s magazine Maxim a “bacchanalia of hedonism,” and described “The Diary of Anne Frank” as “very, very, very disappointing.”
I have noticed that Amazon reviews are highly polarized: 5 stars is the most common rating, with 1-star reviews coming in second. And in fact this makes a lot of sense. Say you think that a product is over-rated at 4.3 stars and that 4 stars is more appropriate. If there are more than just a few ratings, then to pull the average down toward 4 you would have to give the lowest possible rating.
Once enough ratings have already been counted, subsequent raters will be effectively engaging in a tug of war. Those that want to raise the average will give 5 stars and those that want to reduce it will give 1.
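A quick sanity check of this tug-of-war logic, with made-up numbers for the current average and the number of existing ratings:

```python
# Which single star rating moves a product's average closest to the
# rating you think it deserves? (The 4.3 average and the rating counts
# below are illustrative values, not real Amazon data.)

def best_rating(current_avg, n, target):
    """Pick the 1-5 star vote that pulls the average of n existing
    ratings closest to the rater's target."""
    def new_avg(r):
        return (current_avg * n + r) / (n + 1)
    return min(range(1, 6), key=lambda r: abs(new_avg(r) - target))

# With many prior ratings, a rater who thinks 4.3 should be 4.0
# is driven to the extreme:
print(best_rating(4.3, 100, 4.0))   # 1
# With only a couple of prior ratings, a moderate vote suffices:
print(best_rating(4.3, 2, 4.0))     # 3
```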
Many Senators who support health care reform have made public commitments not to vote for any bill without a public option. Such pronouncements are not cheap talk. The pledge can be broken of course but constituents and fellow legislators will hold to account a Senator who breaks it.
And they can be relevant. A commitment not to vote for the Baucus bill raises the costs of proposing that bill because the pledged Senator would have to be compensated for breaking his pledge if he is going to be brought on board. In a simple bargaining game, the pledge will be made if and only if the cost of breaking the pledge is higher than the proposer is willing to pay. In this case the Baucus bill would not be proposed.
But legislative bargaining is not so simple. Each Senator has only one vote. A Senator who commits not to vote for the Baucus bill effectively moves the median voter (for that bill) one Senator to the right. This changes things in three ways by comparison to simple bargaining.
- The committed Senator will not be the median voter and so he will not be part of the bargaining.
- There is presumably a relatively small gap between the old median and the new so the costs imposed by the pre-commitment are much smaller.
- In the event that the gambit fails and the Baucus bill is proposed, it will be a worse bill from the perspective of the gambiteer (it will be farther to the right.)
This means that the commitment is a much less attractive strategy in the legislative setting and it loses much of its relevance. That is, those who are making this commitment would probably not have been willing to vote for the Baucus bill even without any pledge.
Wired reports that the Soviet Union actually had a doomsday device and kept it a secret.
“The whole point of the doomsday machine is lost if you keep it a secret!” cries Dr. Strangelove. “Why didn’t you tell the world?” After all, such a device works as a deterrent only if the enemy is aware of its existence. In the movie, the Soviet ambassador can only lamely respond, “It was to be announced at the party congress on Monday.”
So why was the US not informed about Perimeter? Kremlinologists have long noted the Soviet military’s extreme penchant for secrecy, but surely that couldn’t fully explain what appears to be a self-defeating strategic error of extraordinary magnitude.
The silence can be attributed partly to fears that the US would figure out how to disable the system. But the principal reason is more complicated and surprising. According to both Yarynich and Zheleznyakov, Perimeter was never meant as a traditional doomsday machine. The Soviets had taken game theory one step further than Kubrick, Szilard, and everyone else: They built a system to deter themselves.
By guaranteeing that Moscow could hit back, Perimeter was actually designed to keep an overeager Soviet military or civilian leader from launching prematurely during a crisis. The point, Zheleznyakov says, was “to cool down all these hotheads and extremists. No matter what was going to happen, there still would be revenge. Those who attack us will be punished.”
The logic is a tad fishy. But it is not obvious that you should reveal a doomsday device if you have one. It is impossible to prove that you have one, so if the announcement really had a deterrent effect you would make it even if you don't. Since everyone would announce, the announcement conveys no information, so it can't have a deterrent effect. And since it can't deter, it only adds risk, so you will always turn it off.
What you should worry about is announcing you have a doomsday device to an enemy who previously was not aware that there was such a thing. It still won’t have any deterrent effect but it will surely escalate the conflict. (via free exchange via Mallesh Pai.)
We talked a lot before about designing a scoring system for sports like tennis. There is some non-fanciful economics based on such questions. Suppose you have two candidates for promotion and you want to promote the candidate who is most talented. You can observe their output, but output is a noisy signal that depends not just on talent but also on effort, both of which you cannot observe directly. (Think of them as associates in a law firm. You see how much they bill but you cannot disentangle hard work from talent. You must promote one to partner, where hard work matters less and talent matters more.)
How do you decide whom to promote? The question is the same as how to design a scoring system in tennis to maximize the probability that the winner is the one who is most talented.
One aspect of the optimal contest seems clear. You should let them set the rules. If a candidate knows he has high ability he should be given the option to offer a handicap to his rival. Only a truly talented candidate would be willing to offer a handicap. So if you see that candidate A is willing to offer a higher handicap than candidate B, then you should reward A.
The rub is that you have to reward A, but give B a handicap. Is it possible to do both?
If you are the owner of a large enterprise and are ready to retire, what do you do? Sell to the highest bidder. Before selling, do you want to split your firm into competing divisions and sell them off separately? No, because that would introduce competition, reduce market power and lower the bids, so the sum total would be lower than what you would get for the monopoly. Searle, the drug company, sold itself to Monsanto as one unit.
Miguel Angel Felix Gallardo, the Godfather of the Mexican illegal drug industry, lived a peaceful life as a rich monopolist. Then he was caught in 1989 and decided to sell off his business. In principle, Gallardo should have sold off a monopoly just like Searle. But he did not (see the end of the article). The difference is that property rights are well defined in a legal business, so Searle belongs to Monsanto. But Gallardo cannot commit not to sell the same thing twice, as property rights are not well defined. There is also considerable secrecy, so it is hard to know whether the territory you are buying was already sold to someone else. And after you have sold one bit for a surplus, you have an incentive to sell off another chunk, since you ignore the negative impact this has on the first buyer.
The result is that selling off illegal drug turf produces a more competitive market than the ex ante ideal. As the business is illegal anyway, all the gangs can shoot it out to capture someone else's territory. Exactly what is happening now.
Let’s say you read a big book about recycling because you want to make an informed decision about whether it really makes sense to recycle. The book is loaded with facts: some pro, some con. You read it all, weigh the pluses and minuses and come away strongly convinced that recycling is a good thing.
But you are human and you can only remember so many facts. You are also a good manager so you optimally allow yourself to forget all of the facts and just remember the bottom line that you were quite convinced that you should recycle.
This is a stylized version of how we set personal policies. We have experiences, collect data, engage in debate and then come to conclusions. We remember the conclusions but not always the reasons. In most cases this is perfectly rational. The details matter only insofar as they lead us to the conclusions so as long as we remember the conclusions, we can forget about the reasons.
It has consequences however. How do you incorporate new arguments? When your spouse presents arguments against recycling, the only response you have available is “yes, that’s true but still I know recycling is the right thing to do.” And you are not just being stubborn. You are optimally responding to your limited memory of the reasons you considered carefully in the past.
In fact, we are probably built with a heuristic that hard-wires this optimal memory management. Call it cognitive dissonance, confirmation bias, whatever. It is an optimal response to memory constraints to set policies and then stubbornly stick to them.
China is threatening to cut off imports of American chicken, but poultry experts have at least one reason to suspect it may be an empty threat: Many Chinese consumers would miss the scrumptious chicken feet they get from this country.
“We have these jumbo, juicy paws the Chinese really love,” said Paul W. Aho, a poultry economist and consultant, “so I don’t think they are going to cut us off.”
The story is in the New York Times.
Samoa has switched from driving on the right to driving on the left. The reason is to enable Samoans to import cheaper cars from Japan, which have the steering wheel on the right. So far the switch has not caused any accidents, but public transportation has taken a hit.
All but about 18 of the Pacific island nation’s buses are banned from driving because their doors now open onto the middle of the road.
via mental floss.
Estimates are that 7-10% of the population are left-handed. But more than 20% of professional baseball players are left-handed (the figure is closer to 30% for non-pitchers.) On the other hand, among the 32 seeded players at the US Open tennis tournament, only two are lefties (about 6%.) Explain.
Or skill? It matters because many anti-gambling laws have exceptions for games of skill. From an article in the LA Times:
Other recent skirmishes include a South Carolina case in which five men were arrested in a 2006 raid on a game of Texas Hold ‘Em. They were convicted this year by a municipal court judge who said that he agreed that poker hinged on skill, but that he thought it wasn’t clear whether that was relevant under state law. The men are appealing their convictions.
In Columbia County, Pa., a judge dismissed charges in January against a man accused of running a poker game out of his garage, ruling that he hadn’t committed a crime because when skill predominates, it’s not gambling.
But in a second Pennsylvania case, a Westmoreland County jury last month rejected a man’s contention that the Texas Hold ‘Em tournaments he hosted in local fire halls were legal because they were games of skill.
Can you give an operational definition of a game of skill? Is tic-tac-toe a game of skill? (A bit of trivia: I once bet Matt Rabin I could beat him in 5 games of tic-tac-toe out of 50. I won the bet.) Is rock-paper-scissors a game of chance?
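One way to make the tic-tac-toe question concrete: the game is solved, and a short minimax search confirms that two optimal players always draw, so any skill only shows up against imperfect opponents. A minimal sketch (board as a 9-character string, 'X' moving first):

```python
# Exhaustive minimax solve of tic-tac-toe. Small enough to search
# the full game tree without memoization.

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def value(board, player):
    """Game value under optimal play: +1 if X wins, -1 if O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, s in enumerate(board) if s == ' ']
    if not moves:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    vals = [value(board[:i] + player + board[i + 1:], nxt) for i in moves]
    return max(vals) if player == 'X' else min(vals)

print(value(' ' * 9, 'X'))   # 0: perfect play always draws
```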
Trilby Tilt: The Volokh Conspiracy.
From Not Exactly Rocket Science, here is a writeup on an experiment comparing two systems for enforcing cooperation: punishments and rewards. Subjects were organized into groups of four who repeatedly played a simple public goods game. Each subject in the group could contribute some money to a common pool which would then be multiplied and divided equally among the group members. It is efficient for the group if all members contribute, but each individual group member would do better by free-riding: keeping his money and enjoying the benefits of the others’ contributions.
Playing this game repeatedly encourages cooperation: if one subject is seen to free-ride, the others can respond by contributing less the next time. Foreseeing this, the subjects are induced to keep contributing in order to avoid such a breakdown. This is all standard.
Now what happens when at the end of each round you give the players the additional option to punish the others by spending $x in order to reduce the other’s income by $3x? As you would expect this adds to the threat and further enhances cooperation. But as it turns out, punishments are not as effective as rewards. An identical setup with the exception that spending $x increases the other’s income by $3x leads to even higher payoffs.
Here’s how to think about the game theory behind this. We start by considering all options at the players’ disposal and ask what’s the most money they can make as a group if they are nice. Then we ask an even more important question: what’s the least they can make if they are nasty? What matters for cooperation is not so much these amounts separately as the size of the difference between them. This difference measures how strong the threat of a breakdown in cooperation is. If the difference is big enough, it provides enough incentive to cooperate.
That is, there really is no such thing as a carrot. It’s all about the stick. What the experimenters are calling a carrot is really just additional scope for cooperation. When we ask how much they can earn by cooperating, taking all options into account, we add this “carrot” into the calculation: cooperation means contributing to the public good and giving rewards (remember that you pay $x to confer $3x, so the “reward” is just an extra public good added on to the original one.)
Since withdrawing the reward and literally imposing a punishment both reduce the opponent’s payoff by the same amount, the incentive to cooperate is exactly the same in the two treatments. We should therefore expect that the level of contribution to the public good (net of the reward/punishment addendum) should be the same, and the extra payoffs in the reward treatment come simply from the fact that the rewards are added on.

Aha. The left panel shows contributions to the public good net of rewards/punishments. Blue is the reward treatment, red is the punishment treatment. The right panel shows total payoffs.
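The accounting behind this argument can be sketched in a few lines. The endowment, multiplier, and spending levels below are assumed illustrative values, not the experiment's actual parameters, and transfers are spread evenly over the group rather than targeted:

```python
def public_goods_payoffs(contribs, endowment=20, multiplier=1.6):
    """Stage 1: each player keeps endowment - contribution and gets an
    equal share of the multiplied pool."""
    n = len(contribs)
    pool = multiplier * sum(contribs)
    return [endowment - c + pool / n for c in contribs]

def apply_transfers(payoffs, spent, sign):
    """Stage 2: player i spends spent[i]; each dollar spent changes the
    other players' combined income by sign * 3 dollars."""
    n = len(payoffs)
    out = list(payoffs)
    for i, x in enumerate(spent):
        out[i] -= x
        for j in range(n):
            if j != i:
                out[j] += sign * 3 * x / (n - 1)
    return out

full = [20, 20, 20, 20]                        # everyone contributes fully
base = public_goods_payoffs(full)
with_rewards = apply_transfers(base, [2] * 4, sign=+1)
# Rewards are themselves a public good: each $1 spent creates $3,
# so total payoffs rise above the contributions-only total.
print(sum(base), sum(with_rewards))
```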
Jeff Miron writes
If the CIA had convincingly foiled terrorists acts based on information from harsh interrogations, the temptation to shout it from the highest rooftops would have been overwhelming.
Thus the logical inference is that harsh interrogations have rarely, if ever, produced information of value.
Without taking a stand on the bottom-line conclusion, I wonder about the intermediate claim. If, for example, the CIA can document that torture produced critical intelligence, when would be the optimal time to release that information? There are many reasons to wait until an investigation is already underway.
- If it was already in the public record, that would be in effect a sunk-cost for prosecutors and have less effect on marginal incentives to go forward.
- Public information maximizes its galvanizing effect when the public is focused on it. Watercooler conversations are easier to start when it is common-knowledge that your cubicle-neighbor is paying attention to the same story you are.
- Passing time makes even public information act less public. Again, it’s not the information per se, but the galvanizing effect of getting the public focused on the same facts. Over time those facts can be spun, not to mention simply forgotten.
I expect that the success stories are there as a kind of poison pill against the investigators: they will reach a point where any further progress requires that the positive results come to light.
When animals move, forage or generally go about their lives, they provide inadvertent cues that can signal information to other individuals. If that creates a conflict of interest, natural selection will favour individuals that can suppress or tweak that information, be it through stealth, camouflage, jamming or flat-out lies. As in the robot experiment, these processes could help to explain the huge variety of deceptive strategies in the natural world.
The article at Not Exactly Rocket Science describes an experiment in which robots competed for food at a hidden location and controlled a visible signal that could be used to reveal their location. The robots adapted their signaling strategy by a process that simulates natural selection. Eventually, the robots learned not to pay attention to others’ signals and the signals became essentially uninformative.
In a frightening new paper, Philip Munz, Ioan Hudea, Joe Imad, and Robert J. Smith say NO! It’s such scary news that the BBC covered it.
In their model, Susceptible (S) humans can turn into Zombies (Z) with probability β if they meet each other. But Zombies can also rise from dead susceptibles or the so-called Removed R at rate ς. In a mixed population with no birth, S will definitely shrink. Even if S kill Z at rate α, Z can always re-appear from R and never die off. Hence, we end up in a pure Zombie equilibrium. There is no channel for S to grow and there is a channel for Z to grow and there you have it.
Of course, if there is birth then things change. In their model, the authors look at the case where the (exogenous) birth rate Π is zero. But the birth rate should also depend on the fractions of S and Z in the population. If S is large then there should be frequent S-S encounters. Assume away gender issues for simplicity, and these S-S encounters should lead to progeny. Even if the birth rate is low, it is multiplied by S-squared, the chance of an S-S meeting, while the zombie production rate βSZ + ςR is close to ςR when Z is close to zero. If S is large enough that births outpace ςR, this stabilizes a good S equilibrium in which a small fraction of zombies does not eventually take over.
This is a small trivial extension but with a good title (“Make Love to win the Zombie War”), it would be an interesting sequel.
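A toy discrete-time version of this extension is easy to write down. All parameter values here (β, α, the rising rate, the birth coefficient, the starting populations) are made up for illustration; the point is only the structure of the flows:

```python
def step(S, Z, R, beta=0.5, alpha=1.0, zeta=0.1, pi=0.0, dt=0.01):
    """One Euler step of the S/Z/R flows: births enter S, bites move
    S to Z, destroyed zombies move to R, and R rises again into Z."""
    births = pi * S * S                          # the S-squared extension
    dS = births - beta * S * Z
    dZ = beta * S * Z + zeta * R - alpha * S * Z
    dR = alpha * S * Z - zeta * R
    return S + dt * dS, Z + dt * dZ, R + dt * dR

# With no births (pi = 0), dS is never positive: the susceptible
# population can only shrink, which is the paper's doomsday conclusion.
S, Z, R = 1.0, 0.01, 0.0
for _ in range(5000):
    S, Z, R = step(S, Z, R)
print(S < 1.0)   # True
```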
There is another solution: cremation is better than burial. I’m not an expert on zombies but I strongly suspect a cremated body cannot reappear in zombie form. Then, if we can kill off zombies fast enough (high α), we should be fine. Phew. But while the human race is safe, all individuals are in danger. I will not sleep well tonight.
(Hat Tip: PLL)
Chopped is a show on the Food Network where four chefs compete to win $10K. There are three knockout rounds/courses. In each round, the remaining chefs get some mystery ingredients and have 30 minutes to cook four portions of a dish. One chef is chopped each course by a panel of judges till one remains standing at the end of the dessert round.
In the show I watched tonight, the mystery ingredients in the first round were merguez sausage, broccoli and chives. Chef Ming from Le Cirque tried to make chive crepes with a sausage and broccoli stuffing and a milk-broccoli stem sauce. He used a fancy technique where he turned a frying pan upside down and cooked the crepe on the bottom of the pan. He ran out of time and did not make the sauce. Crepes turned out crap. Basically things did not go too well and he was “chopped”. Far weaker chefs made it to the next round. But Ming’s strategy was wrong: he was one of the best chefs. If he had not cooked a hard dish but a safe dish he would have made it into the next round. This got me thinking about the optimal strategy for the game. Here is my conjecture.
To win you have to cook at least one “home run” dish and two good dishes. The third and final dessert round seems to be the hardest. This time the mystery ingredients were grape leaves, sesame seeds, pickled ginger and melon! It was very challenging to make something edible with that, let alone creative and delicious. If you are lagging (i.e. your opponent has had a home run in a previous round and you have not), you have to go for a home run in the dessert round. Otherwise, just do the best you can: the random choice of ingredients will play a bigger role in your success than your own effort. Reasoning backwards, this implies that you have to go for a home run in one of the first two rounds.
The second round is where I would try for one. If the other two are going for home runs, I could still play it safe and land in the middle. I might do this if I already had a home run in the first round. But if I played it safe in the first round, I have to go for it now. And the latter scenario is the likely one, because in the first round you (at least if you are one of the better chefs) should not go for a home run: the only way you lose is by coming last out of four. Only the most mediocre chef should play a risky strategy in the first round, as this is his only way to win (think of John McCain picking Sarah Palin, a “Hail Mary pass” strategy when he was lagging behind). The other three should produce a nice, safe appetizer. If they are truly the best three chefs they are likely to make it to the second round in equilibrium anyway. And all three will have played safe dishes. And all three should go for home runs in the second round, as the dessert round is not a good time to attempt a great dish.
So, Ming did not get the game strategy right and he got knocked out earlier than he should have. So future contestants take note of this blog entry. I am also willing to provide consulting for chefs if they cook a free dinner for me.
At Legoland, admission is discounted for two-year-olds. But a child must be at least three for most of the fun attractions.
At the ticket window the parents are asked how old the child is. But at the ride entrance the attendants ask the children directly.
The parents lie. The children tell the truth.
Via kottke.org, an article in New Scientist on the mathematics of gambling. One bit concerns arbitrage in online sports wagering.
Let’s say, for example, you want to bet on one of the highlights of the British sporting calendar, the annual university boat race between old rivals Oxford and Cambridge. One bookie is offering 3 to 1 on Cambridge to win and 1 to 4 on Oxford. But a second bookie disagrees and has Cambridge evens (1 to 1) and Oxford at 1 to 2.
Each bookie has looked after his own back, ensuring that it is impossible for you to bet on both Oxford and Cambridge with him and make a profit regardless of the result. However, if you spread your bets between the two bookies, it is possible to guarantee success (see diagram, for details). Having done the calculations, you place £37.50 on Cambridge with bookie 1 and £100 on Oxford with bookie 2. Whatever the result you make a profit of £12.50.
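The stake calculation is mechanical: convert each fractional price to decimal odds, target a common payout, and stake payout/odds on each side. A sketch reproducing the article's numbers:

```python
def arbitrage_stakes(odds_a, odds_b, payout=150.0):
    """odds_* are fractional odds (num, den) for the two outcomes at the
    bookies offering the best price on each. Returns the stakes that
    make the payout the same whichever side wins, and the sure profit
    (positive only when the prices jointly leave an arbitrage)."""
    dec_a = 1 + odds_a[0] / odds_a[1]   # decimal odds: return per 1 staked
    dec_b = 1 + odds_b[0] / odds_b[1]
    stake_a, stake_b = payout / dec_a, payout / dec_b
    return stake_a, stake_b, payout - (stake_a + stake_b)

# Boat-race example: Cambridge at 3/1 with bookie 1, Oxford at 1/2
# with bookie 2, scaled to a guaranteed payout of £150.
sa, sb, profit = arbitrage_stakes((3, 1), (1, 2))
print(sa, sb, profit)   # 37.5 100.0 12.5
```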
I can verify that arbitrage opportunities abound. In my research with Toomas Hinnosaar on sorophilia, we investigated an explanation involving betting. In the process we discovered that the many online bookmakers often quote very different betting lines for basketball games.
How could bookmakers open themselves up to arbitrage and still stay in business? Here is one possible story. First note that, as mentioned in the quote above, no one bookmaker is subject to a sure losing bet. The arbitrage involves placing bets at two different bookies.
Now imagine you are one of two bookmakers setting the point spread on a Clippers-Lakers game and your rival bookie has just set a spread of Lakers by 5 points. Suppose you think that is too low and that a better guess at the spread is Lakers by 8 points. What spread do you set?
Lakers by 6. You create an arbitrage opportunity. Gamblers can place two bets and create a sure thing: with you they take the Clippers and the points. With your rival they bet on the Lakers to cover. You will win as long as the Lakers win by at least 7 points, which is favorable odds for you (remember you think that Lakers by 8 is the right line.) Your rival loses as long as the Lakers win by at least 6 points, which is unfavorable odds for your rival. You come away with (what you believe to be) a winning bet and you stick your rival with a losing bet.
Now this raises the question of why your rival stuck his neck out and posted his line early. The reason is that he gets something in return: he gets all the business from gamblers wishing to place bets early. Put differently, when you decided to wait you were trading off the loss of some business during the time his line is active and yours is not versus the gain from exploiting him if he sets (what appears to you to be) a bad line.
Since both of you have the option of playing either the “post early” or “wait and see” strategy, in equilibrium you must both be indifferent so the costs and benefits exactly offset.
Of course, with online bookmaking the time intervals we are talking about (the time only one line is active before you respond, and the time it takes him to adjust to your response, closing the gap) will be small, so the arbitrage opportunities will be fleeting. (As acknowledged in the New Scientist article.)
There is strategy involved in giving and interpreting compliments. Let’s say you hear someone play a difficult –but not too difficult– piece on the piano, and she plays it well. Is it a compliment if you tell her she played it beautifully?
That depends. You would not be impressed by the not-so-difficult piece if you knew that she was an outstanding pianist. So if you tell her you are impressed, then you are telling her that you don’t think she is an outstanding pianist. And if she is, or aspires to be, an outstanding pianist, then your attempted compliment is in fact an insult.
This means that, in most cases, the best way to compliment the highly accomplished is not to offer any compliment at all. This conveys that all of her fine accomplishments are exactly what you expected of her. But, do wait for when she really outdoes herself and then tell her so. You don’t want her to think that you are someone who just never gives compliments. Once that is taken care of, she will know how to properly interpret your usual silence.
In the world of blogs, when you comment on an article on another blog, it is usually a nice compliment to provide a link to the original post. This is a compliment because it tells your readers that the other blog is worth visiting and reading. But you may have noticed that discussions of the really well-known blogs don’t come with links. For example, when I comment on an article posted at a blog like Marginal Revolution, I usually write merely “via MR, …” with no link.
That’s the best way to compliment a blog that is, or aspires to be, really well-known. It proves that you know that your readers already know the blog in question, know how to get there, and indeed have probably already read and pondered the article being discussed.
This is a companion to our Prisoner’s Dilemma Everywhere series.
Bill Clinton just returned from North Korea with the two American journalists who were being held there. Kim Jong-il got his face time with Bill and the U.S. got two citizens back without sanctions or a war. Win-win as we say in business schools?
No, says John Bolton, former Ambassador to the U.N. The previous stand-off was doing no one any good. Obviously it was bad for the U.S., but it was also bad for North Korea: possible sanctions might have made it hard for the goodies the elite loves to find their way into North Korea. So, the Clinton-Jong-il meeting dominates the previous situation. But Bolton has an even better outcome in mind: Jong-il simply hands over the journalists without us even giving him a face-saving meeting. We threaten them with something (war? sanctions?) and this is enough to give them the incentive to cooperate without us having to give up anything at all. Some might argue we are pretty close to this equilibrium already, as “threat of sanctions plus Clinton visit” amounts to a lot of gain for very little pain.
Whatever the empirical judgments are, the theory is clear: Bolton sees the game as Chicken.
You are out for dinner and your friend is looking at the wine list and gives you “There’s a house wine and then there’s this Aussie Shiraz that’s supposed to be good, what do you think?”
How you answer depends a lot on how long you have known the person. If it was my wife asking me that I would not give it a moment’s thought and go for the Shiraz. If it was someone I know much less about then I would have to think about the budget, I would ask what the house wine was, what the prices were, etc. Then I would give my considered opinion expecting it to be appropriately weighed alongside his.
This is a typical trend in relationships over time. As we come to know one another’s preferences we exchange less and less information on routine decisions. On the one hand this is because there is less to learn, we already know each other very well. But there is a secondary force which squelches communication even when there is valuable information to exchange.
As we learn one another’s preferences, we learn where those preferences diverge. The lines of disagreement become clearer, even when the disagreement is very minor. For example, I learn that I like good wine a little bit more than my wife. Looking at the menu, she sees the price, she sees the alternatives and I know what constellation of those variables would lead her to consider the Shiraz. Now I know that I have a stronger preference for the Shiraz, so if she is even considering it that is enough information for me to know that I want it.
Sadly, my wife can think ahead and see all this. She knows that merely suggesting it will make me pro-Shiraz. She knows, therefore, that my response contains no new information and so she doesn’t even bother asking. Instead, she makes the choice unilaterally and it’s house wine here we come. (Of course waiters are also shrewd game theorists. They know how to spot the wine drinker at the table and hand him the wine list.)
In every relationship there will be certain routine decisions where the two parties have come to see a predictable difference of opinion. For those, in the long run there will be one party to whom decision-making is delegated and those decisions will almost always be taken unilaterally. Typically it will be the party who cares the most about a specific dimension who will be assigned the role of delegate, as this is the efficient arrangement subject to these constraints.
Some relationships have a constitution that prevents delegation and formally requires a vote. Take, for example, the Supreme Court. In recent years, when the composition of the court has been relatively stable, the justices have learned each other’s views in areas that arise frequently.
Justice Scalia can predict the opinion of Justice Ginsburg and Scalia is almost always to the right of Ginsburg. If, during deliberation, Justice Ginsburg reveals any leaning to the right, this is very strong information to Scalia that the rightist decision is the correct one. Knowing this, Ginsburg will be pushed farther to the left: she will express rightist views only in the most extreme cases when it is obvious that those are correct. And the equal and opposite reaction pushes Scalia to the right.
Eventually, the Court becomes so polarized that nearly every justice’s opinions can be predicted in advance. And in fact they will line up on a line. If Breyer is voting right then so will Kennedy, Alito, Roberts, Scalia, and Thomas. If Kennedy is voting left then so are Breyer, Souter, Ginsburg, and Stevens. Ultimately only the centrist judges (previously O’Connor, now Kennedy) are left with any flexibility and all cases are decided 5-4.
When a new guy rotates in, this can upset the equilibrium. There is something to learn about the new guy. There is reason to express opinion again, and this means that something new can be learned about the old guys too. We should see that the ordering of the old justices can be altered after the introduction of a new justice. (Don’t expect this from Sotomayor because she has such a long paper trail. Her place in line has already been figured out by all.)
A few weeks ago, Israeli warships and a nuclear submarine went through the Suez Canal. Israel is signaling that it can come within firing distance of Iran easily:
Israeli warships have passed through the [Suez] canal in the past but infrequently. The recent concentration of such sailings plainly goes beyond operational considerations into the realm of strategic signalling. To reach the proximity of Iranian waters surreptitiously, Israeli submarines based in the Mediterranean would normally sail around Africa, a voyage that takes weeks. Passage through the Suez could take about a day, albeit on the surface and therefore revealed. The Australian
There is a second signal: (Sunni) Egypt is on board with Israel’s focus on preventing the arrival of a nuclear-armed (Shia) Iran. Even Saudi Arabia is alarmed by the growth in the power and influence of its neighbour:
Egypt and other moderate Arab countries such as Saudi Arabia have formed an unspoken strategic alliance with Israel on the issue of Iran, whose desire for regional hegemony is as troubling to them as it is to the Jewish state. There were reports in the international media that Saudi Arabia had consented to the passage of Israeli warplanes through its air space in the event of an attack on Iran’s nuclear facilities but both Riyadh and Jerusalem have denied it. The Australian
International politics makes for strange bedfellows.
He tottered over to the thermostat and there it was: treachery. Despite a long-fought household compromise standard of 74 degrees, someone — Adler’s suspicions instantly centered on his wife — had nudged the temperature up to 78.
For the sleepy freelance writer, it was time to set things right . . . right at 65 degrees. “I just kept pushing that down arrow,” he said of his midnight retaliation. “It was a defensive maneuver.”
The article suggests that women generally prefer higher thermostat settings than men. (It is the opposite in my household.) The focus is on air conditioning in the summer and I wonder whether this ranking reverses in the winter. (My wife prefers more moderate temperatures: cooler in the summer, warmer in the winter.)
Repeated game exam question: will this make the climate wars better or worse? Give your answers and reasons in the comments. Ushanka Shake: Knowledge Problem.
Via Marginal Revolution, here is a report on an experiment wherein top chess players played a textbook example of a game in which “rational” play is never matched in practice. 6000 chess players picked a number between 0 and 100. The winner was the player whose guess was closest to 2/3 of the average. The winner earns his guess in cash.
Nash equilibrium, or even iterative elimination of dominated strategies, implies that no player will guess more than 1. (Nobody should guess more than 66, but then nobody should guess more than 44, but then …) However, in experimental trials, the winning guess is usually around 25.
Most experiments involve volunteers at universities. Would professional chess players, being generally smarter and trained to think strategically, do “better”? Well, they didn’t. But let’s look at it more carefully.
Casual discussion of the predictions of game theory usually blurs an important distinction: between playing rationally and knowing that others will play rationally. To be rational and make smart decisions is one thing, and no doubt the chess players are better at this than college students. But that doesn’t go very far because to make a rational guess just means starting with some hypothesis about how others will guess and then guessing 2/3 of the average of that. What really drives a wedge between the theory and the experiments is that experimental subjects have good reason to doubt that the others are rational.
Even a rational player in the beauty contest experiment will not guess anything close to zero if he is not convinced that all of the other players are rational. For example, guessing 33 is rational if you think that most of the other players are not rational and on average they will guess the midpoint of 50.
And it is not enough just to know that everyone else is rational. If you know that everyone else is rational but you are not convinced that everyone else knows that everyone is rational then you would reasonably predict that everyone else will guess 33 and so you should guess 22.
As long as there is some doubt that others have some doubt that others have … that everyone is rational, then even a rational player will guess something far from 0.
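The chain of reasoning in the last few paragraphs is the standard “level-k” ladder: a level-0 player guesses the midpoint 50, and each higher level best-responds with 2/3 of the level below. A minimal sketch (the level-0 guess of 50 is the usual modeling assumption, not something dictated by the game):

```python
# Level-k guesses in the 2/3-of-the-average game:
# level 0 guesses the midpoint 50, and each higher
# level best-responds with 2/3 of the level below it.
levels = [50.0]
for _ in range(5):
    levels.append(levels[-1] * 2 / 3)

for k, guess in enumerate(levels):
    print(f"level {k}: {guess:.1f}")
```

Level 1 lands on 33 and level 2 on 22, matching the guesses above; only in the limit of unboundedly many levels of mutual knowledge of rationality does the guess reach 0.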
To see the effect of this in action, suppose that 100 subjects are playing.
- 10 of them are not rational and will guess 100,
- 10 are rational (but don’t know that others are rational) and guess 66,
- everyone else is as sophisticated as you wish
Then the average guess cannot be less than (10 × 100 + 10 × 66)/100 ≈ 17, and so the winning guess will be no smaller than 11. And since the winning guess will be no smaller than 11, the highly sophisticated players will not guess less than 11. But then this means that the average guess cannot be less than (10 × 100 + 10 × 66 + 80 × 11)/100 ≈ 25, yielding a winning guess no smaller than about 17! The iterated reasoning is going in the opposite direction now!
Ultimately, the 80 highly sophisticated players will guess the value x that solves
(10 × 100 + 10 × 66 + 80x)/100 = (3/2)x
which gives x = 166/7, or about 23.7. (The winning guess in the experiment involving chess players was 21.5.)
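The fixed point can be checked by direct iteration. A quick sketch of the population assumed above (10 players at 100, 10 at 66, and 80 sophisticated players who all guess the same x):

```python
# Each sophisticated player guesses x, where x must equal
# 2/3 of the resulting population average:
#   avg = (10*100 + 10*66 + 80*x) / 100
# Iterating x -> (2/3)*avg converges because the slope,
# (2/3)*(80/100), is less than 1.
x = 0.0
for _ in range(100):
    avg = (10 * 100 + 10 * 66 + 80 * x) / 100
    x = (2 / 3) * avg

print(round(x, 1))  # 166/7, about 23.7
```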
In my neighborhood trash and recycling are collected separately, on different days, by different entities. On Tuesdays the trash collector drives his little trash shuttle all the way to my garage to empty the trash cans. On Wednesdays, I am required to wheel the recycle bin out to the curb to be collected by the recycling truck.
At first glance the economics would suggest the opposite. The recycling is valuable to the collector, the trash is not, so when bargaining over who has to carry the goods down the driveway, the recycling collector would seem to be in a worse bargaining position.
But on second thought, it makes perfect sense. Can you see why? For a (admittedly obscure) hint, here is a related fact: another difference between the trash and recycling is that the recycling bin is too small to contain a typical week’s worth of recycling and most households usually have recycling overflowing and stacked next to the bin.
If you are following me on Twitter (and have I suggested recently that you should be following me on Twitter?) you will know the answer. For the rest, follow the jump.
Quoting an interview with a Somali Pirate in Wired. (Tricorne tip: Snarkmarket.)
1. Bargaining Power of Pirates
Often we know about a ship’s cargo, owners and port of origin before we even board it. That way we can price our demands based on its load. For those with very valuable cargo on board then we contact the media and publicize the capture and put pressure on the companies to negotiate for its release.
2. Bargaining Power of Foreign Negotiators
Armed men are expensive as are the laborers, accountants, cooks and khat suppliers on land. During long negotiations our men get tired and we need to rotate them out three times a week. Add to that the risk from navies attacking us and we can be convinced to lower our demands.
3. Intensity of Competitive Rivalry
The key to our success is that we are willing to die, and the crews are not.
4. The Value of Hostages
Hostages — especially Westerners — are our only assets, so we try our best to avoid killing them. It only comes to that if they refuse to contact the ship’s owners or agencies. Or if they attack us and we need to defend ourselves.
5. The Threat of the Navy
Whenever we reach an agreement for the ransom, we send out wrong information to mislead the Navy about our exact location. We don’t want them to know where our land base is so that our guys on the ship can manage a safe escape. We have to make sure that the coast is clear of any navy ships before we leave. That said, there is no guarantee that we won’t be shot or arrested, but this has only happened once when the French Navy captured some of our back up people after the pirates left the Le Ponant.
The governing body of international swimming competition FINA is instituting a ban on the high-tech swimsuits that have been used to set a flurry of new world records.
In the 17 months since the LZR Racer hit the market and spawned a host of imitators, more than 130 world records have fallen, including seven (in eight events) by Michael Phelps during the Beijing Olympics.
Phelps, a 14-time Olympic gold medalist, applauded FINA’s proposal that racing suits be made of permeable materials and that there be limits to how much of a swimmer’s body could be covered. The motion must be approved by the FINA Bureau when it convenes Tuesday.
I see two considerations at play here. First, they may intend to put asterisks on all of the recent records in order to effectively reinstate older records by swimmers who never had the advantage of the new suits. For example,
Ian Thorpe’s 2002 world best in the men’s 400 meters freestyle final was thought to be as good as sacred but Germany’s Paul Biedermann swam 3 minutes 40.07 to beat the mark by one hundredth of a second and take gold.
It’s hard to argue with this motivation, but it necessitates a quick return to the old suits in order to give current swimmers a chance to set un-asterisked records while still at their peak. However the ban does not go into effect until 2010.
Don’t confuse this with the second likely motivation which is to put a halt to a technological arms race. That is also the motivation behind banning performance-enhancing drugs. The problem with an arms race is that every competitor will be required to arm in order to be competitive and then the ultimate result is the same level playing field but with the extra cost of the arms race.
On the other hand, allowing the arms race avoids having to legislate and litigate detailed regulations. If we just gave in and allowed performance-enhancers then we would have no drug tests, no doping boards, no scandals. If we ban the new swimsuits we still have to decide exactly which swimsuits are legal. And we go back to chest- and leg-hair shaving. Plastic surgery to streamline the skin?
Swimsuits don’t cause harm like drugs do. Since the costs are relatively low, there is a legitimate argument for allowing this arms race and avoiding having to navigate a new thicket of rules.
Never ask a woman if she is pregnant right? The explanation given to me is that if it turns out she is not pregnant you are in big trouble. But, what if I keep quiet and she really is pregnant. Then she’s thinking “he doesn’t think I am pregnant. That means he thinks I am actually fat in real life. Bastard.” So I am not sure I agree with the conventional wisdom here.
Maybe you are just being cautioned against equivocation. If you ask then you don’t know and whatever the answer is, your uncertainty reveals that you considered it a possibility that she’s fat. Under this theory the right strategy is to use your best judgement and just come out and pronounce it with no hesitation.
Kids are taught that when crossing the street, they should check for oncoming cars by looking left, then right, then left again. Why left again? Isn’t that redundant? You already looked left.
You could imagine that the advice makes sense because during the time he was looking right, cars appeared coming from the left that he did not see when he first looked left. But then wasn’t the first left-look a waste? Maybe not because at the first step if he saw cars coming from the left then he knows that he doesn’t have to look right yet. But then shouldn’t he insert a look-right at the beginning in hopes that he can pre-empt an unnecessary look-left?
I thought for a while and in the end I could not come up with a coherent explanation for the L-R-L again sequence. When you can’t find an example, you prove the counter-theorem. Here it is.
Take any stochastic process for the arrival of cars, and consider the L-R-L again strategy. Consider the first instance at which the strategy reveals that it is safe to cross, and let t be the moment at which the L-R-L again strategy looks to the left for the second time.
Now, consider the alternative strategy R-L. This strategy begins by looking right; then, when there is no car coming from the right, it looks left, and if there is no car coming from the left he crosses. If he is using R-L there are two possibilities.
- The traffic from the right is not clear until time t. In this case, by definition of t, he will next look left and see no traffic and cross.
- The traffic from the right clears before t. Here, he looks left and either sees clear traffic and crosses or sees traffic. In the latter case he is now in exactly the same situation as if he was following L-R-L from the beginning. He waits until the traffic from the left clears and then re-initializes R-L.
In all cases, he crosses safely no later than he would with L-R-L again, and in one case strictly sooner. That is, the strategy R-L dominates the strategy L-R-L. Three further observations.
- This does not mean that R-L is the optimal strategy. I would guess that the optimal strategy depends on the specific stochastic process for traffic. But this does say definitively that L-R-L is not optimal and is bad advice.
- He might get run over by a car if after looking left for the last time he crosses without noticing that a car has just appeared coming from the right. But this would also happen in all the same states when using L-R-L. Crossing the street is dangerous business.
- I believe that the rationale for the L-R-L advice is based on the presumption that the child will not be able to resist looking left at the beginning. Starting by looking right is very counterintuitive. Under this theory, the longhand for the advice is “Go ahead and look left at the beginning, but when you see that the traffic is clear, make sure you look right as well before crossing. And if you see traffic and have to wait for it to clear, don’t forget to look left again before starting out because a car may have appeared in the time you were looking right.”
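The dominance argument can also be illustrated numerically. Below is a rough sketch, not the formal argument above: traffic is modeled as each side being independently clear with probability 0.6 at every step (my own assumption), each look takes one step, and each pattern simply restarts from scratch whenever it sees a car, which simplifies the re-initialization described above:

```python
import random

def crossing_time(traffic, pattern):
    """Steps until crossing when following the look pattern
    (e.g. "LRL"): each look takes one step, and seeing a car
    restarts the pattern on the next step."""
    t, i = 0, 0
    while True:
        left_clear, right_clear = traffic[t]
        clear = left_clear if pattern[i] == "L" else right_clear
        i = i + 1 if clear else 0
        t += 1
        if i == len(pattern):
            return t

random.seed(0)
P_CLEAR = 0.6   # assumed chance each side is clear at any step
TRIALS = 2000
totals = {"LRL": 0, "RL": 0}
for _ in range(TRIALS):
    # One shared traffic realization, so both strategies face
    # exactly the same cars.
    traffic = [(random.random() < P_CLEAR, random.random() < P_CLEAR)
               for _ in range(10_000)]
    for pattern in totals:
        totals[pattern] += crossing_time(traffic, pattern)

for pattern, total in totals.items():
    print(pattern, round(total / TRIALS, 2))
```

With these parameters R-L crosses in roughly half the time of L-R-L on average, consistent with the claim that the extra look buys nothing.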
The No Trade Theorem says that two traders with common prior beliefs will not find a mutally beneficial speculative trade provided they began with a Pareto efficient allocation. There is in fact a converse. If the traders do not share a common prior, then they can always find such a trade.
My kids demonstrated this experimentally today in the car coming home from Evanston’s Dixie Kitchen and Bait Shop (Recommended by Barack Obama!) Two kids have identical rubber alligator swag from the restaurant. 3 year old believes that 6 year old has his alligator and demands a swap. 6 year old insists that all gators are with their rightful owners. There is common knowledge that they disagree about this and therefore by Aumann’s famous theorem they do not share a common prior.
Dad takes temporary possession of both rubber reptiles. In plain view of the 6 year old, Dad pretends to switch but doesn’t. Sleight of hand deceives 3 year old. Alligators returned to original owners. Voilà, Pareto improvement.
I forgot to get my commission.
To remind you, reCAPTCHA asks you to decipher two smeared words before you can register for, say, a gmail account. One of the words is being used to test whether you are a human and not a computer. The reCAPTCHA system knows the right answer for that word and checks whether you get it right. The reCAPTCHA system doesn’t know the other word and is hoping you will help figure it out. If you get the test word right, then your answer on the unknown word is assumed to be correct and used in a massive parallel process of digitizing books. The words are randomly ordered so you cannot know which is the test word.
Once you know this, you may wonder whether you can save yourself time by just filling in the first word and hoping that one is the test word. You will be right with 50% probability. And if so, you will cut your time in half. If you are unlucky, you try again, and you keep on guessing one word until you get lucky. What is the expected time from using this strategy?
Let’s assume it takes 1 second to type in one word. If you answer both words you are sure to get through at the cost of 2 seconds of your time. If you answer one word each time then with probability 1/2 you will pass in 1 second, with probability 1/4 you will pass in 2 seconds, probability 1/8 you pass in 3 seconds, etc. Then your expected time to pass is
Is this more or less than 2? Answer after the jump.
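The series set up above can be summed numerically; a quick sketch of the calculation:

```python
# Expected time when guessing one word per attempt:
# you pass in k seconds with probability (1/2)**k, so
#   E[T] = sum over k >= 1 of k * (1/2)**k
# (the tail beyond k = 200 is negligible).
expected = sum(k * 0.5**k for k in range(1, 200))
print(round(expected, 6))  # 2.0: exactly the cost of just typing both words
```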
