
Party A steals something of value to Party B and demands a ransom for its return. But once the ransom has been paid, what is to stop Party A from coming back and demanding more?

One mechanism that purchases commitment is reputation. Party A has more ransoms to extract in the future and seeks to be seen as a fair player despite being an extortionist. An interesting example is provided by Cryptowall. This “company” sends an email with a devious attachment: a virus that encrypts your hard drive if you click on it. They demand a ransom in Bitcoin to send the decryption key. The price changes over time.

Because they encrypt your data in place rather than taking a copy, once you pay and decrypt there is nothing left to hold hostage: they cannot come back and demand another ransom for the same data.

Because the price changes, there can be errors: you pay a ransom of 500, by the time the payment arrives the price has gone up to 550, and you do not get the decryption key. What to do? A good credit card company would waive a late fee to keep a good reputation, and so does Cryptowall. From the New York Times:

Use the CryptoWall message interface to tell the criminals exactly what happened. Be honest, in other words.

So she did. She explained that the virus had struck the same week that a major snowstorm hit Massachusetts and the Thanksgiving holiday shut down the banks. She told them about the unexpected Bitcoin shortfall and about dispatching her daughter to the Coin Cafe A.T.M. at the 11th hour. She swore she had really, really tried not to miss their deadline. And then a weird thing happened: Her decryption key arrived.

(HT: Alex Wearn)

Suppose there’s a precedent that people don’t like.  A case comes up and they are debating whether the precedent applies.  Often the most effective way to argue against it is to cite previous cases where the precedent was applied and argue that the present case is different.

In order to maximally differentiate the current case they will exaggerate how well the precedent fit the specific details of the previous case. They can concede this even though they disagree with the precedent in principle, because that case was already decided and nothing can be done about it now.

The long run effect of this is to solidify those cases as being good examples where the precedent applies and thereby solidify the precedent itself.

Amazon wants to use small bricks-and-mortar retailers to sell more Kindles and eBooks. It is trying to give those retailers an incentive to execute Amazon's business strategy:

Retailers can choose between two programs:
1) Bookseller Program: Earn 10% of the price of every Kindle book purchased by their customers from their Kindle devices for two years from device purchase. This is in addition to the discount the bookseller receives when purchasing the devices and accessories from Amazon.
2) General Retail Program: Receive a larger discount when purchasing the devices from Amazon, but do not receive revenue from their customers’ Kindle book purchases.

EBooks are an existential threat to retailers. But no single small bookstore can significantly affect the probability that the eBook market succeeds through its own choice of whether to join Amazon's program. Hence, it can ignore this existential issue in making its own choice. Suppose it is beneficial for a small bookstore owner to join the program ceteris paribus. After all, people are coming in, browsing and then heading to Amazon to buy eBooks, so why not capture some of that revenue? Many owners independently make the decision to join the program. Kindle and eBook penetration increases even further and small bookstores disappear.
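A minimal numeric sketch of that free-rider logic (all parameter values are invented for illustration): joining is a dominant strategy for each store, yet universal joining leaves every store worse off than universal abstention.

```python
# Hypothetical free-rider sketch: joining Amazon's program is privately
# profitable, but each join slightly raises eBook penetration, hurting
# every store. All numbers are made up for illustration.

N = 100          # number of small bookstores
g = 1.0          # private gain to a store from joining
c = 0.05         # externality each joiner imposes on every store

def payoff(join: bool, n_others_joining: int) -> float:
    """Payoff to one store given its choice and the others' choices."""
    n_joining = n_others_joining + (1 if join else 0)
    return (g if join else 0.0) - c * n_joining

# Joining is dominant: for any number of other joiners, join beats stay out.
for n_others in (0, 50, 99):
    assert payoff(True, n_others) > payoff(False, n_others)

# Yet if everyone joins, each store is worse off than if nobody joined.
print("all join:  ", payoff(True, N - 1))   # 1 - 0.05*100 = -4.0
print("none join: ", payoff(False, 0))      # 0.0
```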

The less you like talking on the phone the more phone calls you should make.  Assuming you are polite.

Unless the time of the call was pre-arranged, the person placing the call is always going to have more time to talk than the person receiving it, simply because the caller is the one who chose this moment to call. So if you receive a call but you are too polite to make an excuse to hang up, you are going to be stuck talking for a while.

So in order to avoid talking on the phone you should always be the one making the call.  Try to time it carefully.  It shouldn’t be at a time when your friend is completely unavailable to take your call because then you will have to leave a voicemail and he will eventually call you back when he has plenty of time to have a nice long conversation.

Ideally you want to catch your friend when they are just flexible enough to answer the phone but too busy to talk for very long.  That way you meet your weekly quota of phone calls at minimum cost in terms of time actually spent on the phone.  What could be more polite?

Matthew Rabin was here last week presenting his work with Erik Eyster about social learning. The most memorable theme of their papers is what they call “anti-imitation.” It’s the subtle incentive to do the opposite of someone in your social network even if you have the same preferences and there are no direct strategic effects.

You are probably familiar with the usual herding logic. People in your social network have private information about the relative payoff of various actions. You see their actions but not their information. If their action reveals they have strong information in favor of it you should copy them even if you have private information that suggests doing the opposite.

Most people who know this logic probably equate social learning with imitation and eventual herding. But Eyster and Rabin show that the same social learning logic very often prescribes doing the opposite of people in your social network. Here is a simple intuition. Start with a different, but simpler problem.  Suppose that your friend makes an investment and his level of investment reveals how optimistic he is. His level of optimism is determined by two things, his prior belief and any private information he received.

You don’t care about his prior; it doesn’t convey any information that’s useful to you. But you do want to know what information he got. The problem is that the prior and the information are entangled, and just by observing his investment you can’t tease out whether he is optimistic because he was optimistic a priori or because he got some bullish information.

Notice that if somebody comes and tells you that his prior was very bullish, this will lead you to downgrade your own level of optimism. Because, holding his final beliefs fixed, the more optimistic his prior was, the less optimistic his new information must have been, and it’s that new information that matters for your beliefs. You want to do the opposite of his prior.
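A minimal Gaussian sketch of this subtraction (my notation, not Eyster and Rabin's): suppose your friend's prior is \theta \sim N(\mu_0, \sigma_0^2) and his signal is s = \theta + \epsilon with \epsilon \sim N(0, \sigma_\epsilon^2). His investment reveals his posterior mean m, and

```latex
m = (1-w)\,\mu_0 + w\,s,
\qquad w = \frac{\sigma_0^2}{\sigma_0^2 + \sigma_\epsilon^2},
\qquad\text{so}\qquad
s = \frac{m - (1-w)\,\mu_0}{w}.
```

Holding the observed m fixed, the inferred signal s is strictly decreasing in \mu_0: the more optimistic his prior turns out to have been, the worse the news he must have received.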

This is the basic force behind anti-imitation. (By the way I found it interesting that the English language doesn’t seem to have a handy non-prefixed word that means “doing the opposite of.”) Suppose now your friend got his prior beliefs from observing his friend. And now you see not only your friend’s investment level but his friend’s too. You have an incentive to do the opposite of his friend for exactly the same reason as above.

This assumes his friend’s action conveys no information of direct relevance for your own decision. And that leads to the prelim question. Consider a standard herding model where agents move in sequence first observing a private signal and then acting.  But add the following twist. Each agent’s signal is relevant only for his action and the action of the very next agent in line.  Agent 3 is like you in the example above.  He wants to anti-imitate agent 1. But what about agents 4,5,6, etc?

Boredom is wasted on the bored

 

I coach my daughter’s U12 travel soccer team. An important skill that a player of this age should be picking up is the instinct to keep her head up when receiving a pass, survey the landscape and plan what to do with the ball before it gets to her feet.  The game has just gotten fast enough that if she tries to do all that after the ball has already arrived she will be smothered before there is a chance.

Many drills are designed to train this instinct and today I invented a little drill that we worked on in the warmups before our game against our rivals from Deerfield, Illinois. The drill makes novel use of a trick from game theory called a jointly controlled lottery.

Imagine I am standing at midfield with a bunch of soccer balls and the players are in a single-file line facing me just outside of the penalty area.  I want to feed them the ball and have them decide as the ball approaches whether they are going to clear it to my left or to my right. In a game situation, that decision is going to be dictated by the position of their teammates and opponents on the field. But since this is just a pre-game warmup we don’t have that.  I could try to emulate it if I had some kind of signaling device on either flank and a system for randomly illuminating one of the signals just after I delivered the feed.  The player would clear to the side with the signal on.

But I don’t have that either and anyway that’s too easy and quick to read to be a good simulation of the kind of decision a player makes in a game.  So here’s where the jointly controlled lottery comes in.  I have two players volunteer to stand on either side of me to receive the clearing pass.  Just as I deliver the ball to the player in line the two girls simultaneously and randomly raise either one hand or two.  The player receiving the feed must add up the total number of hands raised and if that number is odd clear the ball to the player on my left and if it is even clear to the player on my right.

The two girls are jointly controlling a randomization device.  The parity of the number of hands is not under the control of either player.  And if each player knows that the other is choosing one or two hands with 50-50 probability, then each player knows that the parity of the total will be uniformly distributed no matter how that individual player decides to randomize her own hands.
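A quick sketch that verifies the claim by enumeration (the code and the mixing probabilities in it are mine, just to check the arithmetic):

```python
# Verify: if one volunteer raises one or two hands with probability 1/2 each,
# the parity of the total is 50-50 no matter how the other volunteer mixes.
from fractions import Fraction

def parity_odd_prob(p_one_hand_a, p_one_hand_b=Fraction(1, 2)):
    """Probability the total number of raised hands is odd."""
    prob = Fraction(0)
    for a, pa in ((1, p_one_hand_a), (2, 1 - p_one_hand_a)):
        for b, pb in ((1, p_one_hand_b), (2, 1 - p_one_hand_b)):
            if (a + b) % 2 == 1:
                prob += pa * pb
    return prob

# Whatever player A does -- always one hand, always two, or any mix --
# the parity stays uniform as long as player B mixes 50-50.
for p in (Fraction(0), Fraction(1, 3), Fraction(9, 10), Fraction(1)):
    assert parity_odd_prob(p) == Fraction(1, 2)
print("parity is 50-50 against every strategy of the other volunteer")
```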

And the nice thing about the jointly controlled lottery in this application is that the player receiving the feed must look left, look right, and think before the ball reaches her in order to be able to make the right decision as soon as it reaches her feet.

We beat Deerfield 3-0.

  1. Facebook’s business problem is that it is the social network of people you see in real life.  All the really interesting stuff you want to do and say on the internet is stuff you’d rather not share with those people or even let them know you are doing/saying.
  2. What is the rationale for offsides in soccer that doesn’t also apply to basketball?
  3. If the editors of all the journals were somehow agreeing to publish each other’s papers what patterns would we look for in the data to detect that?
  4. I need to know in advance the topic of the next 3 Gerzensee conferences so that I can start now writing papers on those topics in hopes of getting invited.

Suppose you are writing a referee report and you are recommending that the paper be rejected. You have a long list of reasons. How many should you put in your report? If you put only your few strongest arguments you run the risk that the author (or editor) finds a response to those and accepts the paper.

You will have lost the chance to use your next few strongest arguments to their full effect, even if there is a second round. The reason has to do with a basic friction of rhetoric.  Nobody really knows what’s true or false, but the more you’ve thought about it the better informed you are. So there is always a signaling aspect to rhetoric. Even if the opponent can’t find a counterargument, when it is known that you rank an argument low in terms of persuasiveness, that argument becomes in fact less persuasive.  Your ranking reveals your belief that a counterargument probably exists, even if by chance this time it wasn’t found.

On the other hand you also don’t want to put down all of your arguments. The risk here is that the author refutes all but your strongest one or two. Then the editor may conclude that your decision to reject was made on the basis of that long list of considerations, and now that a large percentage of them have been refuted, the case for acceptance is sealed. Had you left out all the weak arguments your case would look stronger.

It may even be optimal to pick a non-interval subset of arguments. That is you might give your strongest argument, leave out the second strongest but include the third strongest. The reason is that you care not just about the probability that any single one of your arguments is refuted but the probability that a large subset of your arguments survive. And here correlation matters. It may be that a refutation of the strongest argument is likely also to partially weaken the second-strongest. You pick the third because it is orthogonal to the first.
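A toy joint distribution (all numbers invented) makes the correlation point concrete. Suppose refutations of arguments 1 and 2 are positively correlated while argument 3 is independent of both:

```python
# Toy numbers (invented). R_i = event that argument i is refuted. Arguments
# 1 and 2 rest on similar logic, so their refutations are positively
# correlated; argument 3 is orthogonal to both.
p_r1 = 0.20                  # strongest argument refuted with prob 0.20
p_r2_given_r1 = 0.80         # if argument 1 falls, argument 2 probably falls
p_r2_given_not_r1 = 0.1125   # chosen so that P(R2) = 0.25 overall
p_r3 = 0.30                  # third argument: weaker, but independent

# Probability that BOTH submitted arguments get refuted -- the disaster
# scenario in which the editor concludes your case has collapsed.
both_refuted_12 = p_r1 * p_r2_given_r1          # 0.16
both_refuted_13 = p_r1 * p_r3                   # 0.06

print(f"P(both of {{1,2}} refuted) = {both_refuted_12:.3f}")
print(f"P(both of {{1,3}} refuted) = {both_refuted_13:.3f}")
# Pairing the strongest argument with the orthogonal third one makes a
# total collapse nearly three times less likely, even though argument 3 is
# weaker on its own (P(R3) = 0.30 > P(R2) = 0.25).
```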

I got this off Tim Harford’s Twitter feed, where he describes it as a Prisoners’ Dilemma. I’m not so sure:

First of all, you are not allowed to give any online hints that you are playing. If you do, you cause unending shame to be heaped upon yourself. This defeats the entire purpose.

On each turn, you give your phone (which must have a Twitter client, signed in to your main Twitter account) to another player. For the first turn, you pass your phone to the person at your left, and in exchange you receive a phone from the person to your right. On the second turn, your phone is given to the person two people to your left, etc. When you’ve passed your phone to everyone around the table, the round is over.

When you receive a phone from someone else, it should have the phone’s Twitter client active, with whatever UI there is to make a new tweet. Then you enter in anything you want. Anything. There are no rules to this part. However, and this is very important: DO NOT POST yet. You may get to do that later. Instead, hand the phone back to the owner.

When you receive your phone back, look at the proposed tweet. Then hand it back to the same person who composed it.

If you don’t want them to post it, conceal a $20 bill in your hand. If you want to allow them to post it, put nothing in your hand. Making sure to hide anything that may be in your hand, put it forward onto the table. Wait until everyone has put their hand in, and then all of you must open your hands simultaneously.

If everyone has $20 in their hands, the money goes into the pot for the next round and nothing is posted.

If nobody has $20 in their hands, nothing gets posted.

If some people have $20 and some people are empty-handed, posts happen for those people who didn’t pay up, and the money (including anything already in the pot) is distributed evenly to those people who didn’t pay.

Finally, any tweets made during this game may not be erased at least until the NEXT occasion that the person plays the game.

I guess if you want people to suffer embarrassment then no one giving $20 is not an equilibrium.

Suppose you and a friend of the opposite sex are recruited for an experiment. You are brought into separate rooms and told that you will be asked some questions and, unless you give consent, all of your answers will be kept secret.

First you are asked whether you would like to hook up with your friend. Then you are asked whether you believe your friend would like to hook up with you. These are just setup questions. Now come the important ones. Assuming your friend would like to hook up with you, would you like to know that? Assuming your friend is not interested, would you like to know that? And would you like your friend to know that you know?

Assuming your friend is interested, would you like your friend to know whether you are interested? Assuming your friend is not interested, same question. And the higher-order question as well.

These questions are eliciting your preferences over you and your friend’s beliefs about (beliefs about…) you and your friend’s preferences. This is one context where the value of information is not just instrumental (i.e. it helps you make better decisions) but truly intrinsic. For example I would guess that for most people, if they are interested and they know that the other is not, they would strictly prefer that the other not know that they are interested. Because that would be embarrassing.

And I bet that if you are not interested and you know that the other is interested you would not like the other to know that you know that she is interested. Because that would be awkward.

Notice in fact that there is often a strict preference for less information. And that’s what makes the design of a matching mechanism complicated.  Because in order to find matches (i.e. discover and reveal mutual interest) you must commit to reveal the good news. In other words, if you and your friend both inform the experimenters that you are interested and that you want the other to know that, then in order to capitalize on the opportunity the information must be revealed.

But any mechanism which reveals the good news unavoidably reveals some bad news precisely when the good news is not forthcoming. If you are interested and you want to know when she is interested and you expect that whenever she is indeed interested you will get your wish, then when you don’t get your wish you find out that she is not interested.

Fortunately though there is a way to minimize the embarrassment. The following simple mechanism does pretty well. Both friends tell the mediator whether they are interested.  If, and only if, both are interested the mediator informs both that there is a mutual interest. Now when you get the bad news you know that she has learned nothing about your interest. So you are not embarrassed.
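Here is a minimal sketch of that mediator (the function and the message strings are hypothetical, just to make the information flows explicit):

```python
# A minimal sketch of the mediator described above: each side reports
# interest privately, and the mediator announces a match only when the
# interest is mutual. Names and messages are mine, not from any real
# matching service.

def mediate(a_interested: bool, b_interested: bool) -> str:
    """Return the (common) message both sides receive."""
    if a_interested and b_interested:
        return "mutual interest"
    return "no match"   # deliberately silent about who, if anyone, said yes

# An uninterested party learns nothing: she receives "no match" regardless
# of the other's report, so her beliefs about him are unchanged.
assert mediate(False, True) == mediate(False, False) == "no match"

# An interested party who hears "no match" does learn the other said no.
# That is the unavoidable bad news discussed above -- but the other side
# never learns for sure that this inference was made.
print(mediate(True, False))   # "no match": A now knows B is not interested
```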

However it doesn’t completely get rid of the awkwardness. When she is not interested she knows that *if* you are interested you have learned that she is not. She doesn’t know for sure that this state of affairs has occurred. She thinks it has occurred if and only if you are interested, so she thinks it has occurred with some moderate probability. So it is moderately awkward. And indeed you know that she is not interested and therefore feels moderately awkward.

The theoretical questions are these:  under what specification of preferences over higher-order beliefs over preferences is the above mechanism optimal? Is there some natural specification of those preferences in which some other mechanism does better?

Update: Ran Spiegler points me to this related paper.

Why are conditional probabilities so rarely used in court, and sometimes even prohibited?  Here’s one more good reason:  prosecution bias.

Suppose that a piece of evidence X is correlated with guilt.  The prosecutor might say, “Conditional on evidence X, the likelihood ratio for guilt versus innocence is Y; update your priors accordingly.”  Even if the prosecutor is correct in his statistics, his claim is dubious.

Because the prosecutor sees the evidence for all suspects before deciding which ones to bring to trial.  And the jurors know this.  So the fact that evidence like X exists against this defendant is already partially reflected in the fact that it was this guy they brought charges against and not someone else.

If jurors were truly Bayesian (a necessary presumption if we are to consider using probabilities in court at all) then they would already have accounted for this and updated their priors accordingly before even learning that evidence X exists.  When they are actually told, their beliefs would necessarily move less than what the statistics imply, perhaps hardly at all, maybe even in the opposite direction.
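A small simulation makes the selection effect concrete (all probabilities invented): the prosecutor screens every suspect's evidence before charging, and we compare the naive likelihood-ratio update with what being charged has already revealed.

```python
# Simulation (all numbers invented): evidence X appears with probability 0.8
# against the guilty party and 0.1 against each innocent suspect. The
# prosecutor sees everyone's evidence before deciding whom to charge, and
# charges someone holding X whenever anyone does.
import random

random.seed(0)
K = 10                        # suspects per case; exactly one is guilty
P_X_GUILTY, P_X_INNOCENT = 0.8, 0.1
TRIALS = 200_000

charged = guilty_charged = 0
charged_x = guilty_charged_x = 0
for _ in range(TRIALS):
    guilty = random.randrange(K)
    has_x = [random.random() < (P_X_GUILTY if i == guilty else P_X_INNOCENT)
             for i in range(K)]
    pool = [i for i in range(K) if has_x[i]]
    defendant = random.choice(pool) if pool else random.randrange(K)
    charged += 1
    guilty_charged += (defendant == guilty)
    if has_x[defendant]:
        charged_x += 1
        guilty_charged_x += (defendant == guilty)

prior = 1 / K
naive = prior * P_X_GUILTY / (prior * P_X_GUILTY + (1 - prior) * P_X_INNOCENT)
print(f"naive juror, updating on X alone:    {prior:.2f} -> {naive:.2f}")
print(f"P(guilty | charged), before X shown: {guilty_charged / charged:.2f}")
print(f"P(guilty | charged and X shown):     {guilty_charged_x / charged_x:.2f}")
# Approximate output: the naive juror jumps from 0.10 to 0.47 on hearing X,
# while the juror who knows charges are brought selectively starts near
# 0.53 and moves only to about 0.56 -- X at trial is almost no news.
```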

Should restaurants put salt shakers on the table?  A variety of food writers weigh in on the question here.

The naive argument is that salt shakers give diners more control. They know their own tastes and can fine tune the salt to their liking. The problem with this argument is that salt shaken over prepared food is not the same as salt added to food as it is cooked.  A chef adds salt numerous times through the cooking process to different items on the plate because some need more salt than others.

So the benefit of control comes at the cost of excess uniformity in the flavor. But beyond that, there is an interesting strategic issue. When there is no salt shaker on the table the chef chooses the level of saltiness to meet some median or average diner’s taste for salt. All diners get equally salty food independent of their taste. Diners to the left of the median find their dish too salty and diners to the right wish they had a salt shaker.

A reduction in the level of saltiness benefits those just to the left of the median at the expense of those far to the right and at an optimum those costs outweigh the benefits.

But when there is a salt shaker, the chef can reduce the level of saltiness at a lower cost because those to the right can compensate (albeit imperfectly) by adding back the salt. So in fact the optimal level of salt added by a chef whose restaurant puts salt shakers on the table is lower.
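A toy version of the chef's problem makes the comparative static explicit (the functional forms and the compensation parameter k are my assumptions, not from the linked discussion):

```python
# Diners' salt bliss points t are uniform on [0, 1]; the chef picks a
# saltiness s. A diner with t < s suffers loss (s - t)^2 -- salt cannot be
# removed. A diner with t > s can shake salt on, but shaken salt is an
# imperfect substitute, leaving residual loss k*(t - s)^2 with k < 1.
# No shaker on the table means k = 1.
import math

def optimal_salt(k: float) -> float:
    # Chef minimizes s^3/3 + k*(1-s)^3/3; the first-order condition
    # s^2 = k*(1-s)^2 gives s = sqrt(k) / (1 + sqrt(k)).
    return math.sqrt(k) / (1 + math.sqrt(k))

print(f"no shaker   (k = 1.00): s* = {optimal_salt(1.0):.3f}")   # 0.500
print(f"with shaker (k = 0.25): s* = {optimal_salt(0.25):.3f}")  # 0.333
# Shakers on the table lower the kitchen's optimal saltiness: the chef can
# lean toward the low-salt diners because high-salt diners self-correct.
```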

So the interesting observation is that salt shakers on the table benefit diners who like less salt (and also those that like a lot of salt) at the expense of the average diner (who would otherwise be getting his salt bliss point but is now getting too little).

Roy is coming to plant flowers in Zoe’s garden. Zoe loves flowers, her utility for a garden with x flowers is

z(x) = x.

Roy plants a unit mass of seeds and the fraction of these that will bloom into flowers depends on how attentive Roy is as a gardener. Roy’s attentiveness is his type \theta. In particular when Roy’s type is \theta, absent any sabotage by Zoe, there will be \theta flowers in Zoe’s garden in Spring. Roy’s attentiveness is unknown to everyone and it is believed by all to be uniformly distributed on the unit interval.

Jane, Zoe’s neighbor, is looking for a gardener for the following Spring. Jane has high standards, she will hire Roy if and only if he is sufficiently attentive. In particular, Jane’s utility for hiring Roy when his true type is \theta is given by

j(\theta) = \theta - 2/3.

(Her utility is zero if she does not hire Roy.)

Roy tends to one and only one garden per year. Therefore Roy will continue to plant flowers in Zoe’s garden for a second year if and only if Jane does not hire him away from her.

Consequently, Zoe is contemplating sabotaging Roy’s flowers this year. If Zoe destroys a fraction 1 - \alpha of Roy’s seeds then the total number of flowers in Zoe’s garden when Spring arrives will be x = \alpha\theta. Of course sabotage is costly for Zoe because she loves flowers.

There will be no sabotage in the second year because after two years of gardening Roy goes into retirement. Therefore, if Zoe destroys a fraction 1-\alpha of the seeds in the first year and Roy continues to work for Zoe in the second year, Zoe’s total payoff will be

z(\alpha\theta) + z(\theta)

whereas if Roy is hired away by Jane, then Zoe’s total payoff is just z(\alpha\theta).

This is a two-player (Zoe and Jane) extensive-form game with incomplete information. The timing is as follows. First, Roy’s type is realized. Nobody observes Roy’s type. Zoe moves first and chooses \alpha \in [0,1]. Then Spring arrives and the flowers bloom.  Jane does not observe \alpha but does observe the number of flowers in Zoe’s garden. Then Jane chooses whether or not to hire Roy away from Zoe. Then the game ends.

Describe the set of all Perfect Bayesian Equilibria.

When you over-inflate a kid’s self-esteem you achieve a short-run gain (a boost in confidence) at the expense of a long-run cost (jaded kids who learn that praise is just noise). For that reason, emphasis on managing self-esteem gets a lot of scorn.

But what is the cost of jaded kids? They learn to see through your lies. All that means is that their credence is a scarce resource that parents must manage. In a first-best world you are honest with your kids right up until the stage in their lives when a false boost of self-confidence has maximal payoff. Probably when they are taking the SAT.

Unfortunately it’s not a first-best world: even if you don’t lie to them, other people will and eventually they will learn to be appropriately skeptical. Which means that a child’s trust is an exogenously depreciating resource. It’s just a matter of time before they are relieved of it.

Given the inevitability of that process you have two alternatives. Deplete their credence yourself and choose what lies they get told in the process, or be always truthful and allow their trust to be violated by outside forces.

Doing it yourself at least gives them the admittedly transient benefit that comes from an artificial boost of self-confidence.  And the sooner the better.

In my kids’ tennis class they are getting good enough to have actual rallies.  The coach feeds them a ball and has them play out points.  Each rally is worth 1 point and they play to 10.  To stop them from trying to hit winners on the first shot, and in an attempt to get them to play longer rallies, the coaches tried out an interesting rule:  “The ball must cross the net four times before the point begins.  If your shot goes out before that, it’s 2 points for the other side.”

Amnesty (forgiving all current and previous violators while renewing the threat to punish future violators) always seems like a reputation fail.  If we are granting amnesty today then doesn’t that signal that we will eventually be granting amnesty again in the future?

But there is at least one environment in which a once-only amnesty is incentive compatible and effective:  when crime has bandwagon effects.  For example, suppose there’s a stash of candy in the pantry and my kids have taken to raiding it.  I catch one red-handed but I can’t punish her because she rightly points out that since everybody’s doing it she assumed we were looking the other way.  A culture of candy crime had taken hold.

An amnesty (bring me your private stash and you will be forgiven) moves us from the equilibrium where everyone’s a criminal because everyone’s a criminal to the one in which nobody’s a criminal.  The latter is potentially stable if it’s easier to single out and punish a lone offender than one of many.
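A toy bandwagon model of the pantry (parameters invented) shows the two stable states and why the post-amnesty threat is credible:

```python
# Enforcement capacity is fixed, so the chance that any one offender is
# caught and punished falls with the number of offenders n. Crime pays
# iff gain > catch_prob(n) * penalty. All numbers are invented.
GAIN, PENALTY, CAPACITY, N = 1.0, 3.0, 2.0, 20

def crime_pays(n_offenders: int) -> bool:
    catch_prob = min(1.0, CAPACITY / n_offenders)
    return GAIN > catch_prob * PENALTY

# Two stable states of the candy pantry:
print("lone offender profits?", crime_pays(1))   # False: nobody offends
print("one of twenty profits?", crime_pays(N))   # True: everybody offends
# An amnesty is a coordinated jump from the second state back to the
# first, and the renewed threat is credible there because a lone future
# offender faces the full force of enforcement.
```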

Ghutrah grip:  Maximo Rossi

Spouse A (henceforth “she”, the driver) prefers the air inside the vehicle to be a little warmer than the preferred temperature of Spouse B (“he”, the navigator, not because he is a worse driver, quite the contrary, but because he is an even better passenger). In their regular confrontation with this dilemma they are seemingly blessed with the optional dual-zone climate control in their decked-out Volvo SUV.

And indeed there is an equilibrium of the dual climate-zone game in which each spouse enjoys his/her temperature bliss point. This equilibrium is unfortunately highly unstable. Because of the exchange of heat across the thermal gradient the only way each can maintain the constant target temperature is to adjust their controllers so that the air blown out their respective vents deviates slightly from that target further in the direction of the extreme. Hers must be set somewhat warmer and his somewhat cooler.

Now from that starting point, the slightest perturbation upsets the delicate balance and can set off a dangerous chain reaction. Consider for example what happens when, due to random alterations in air flow she begins to feel a bit on the cool side of her comfort zone. Her response is to adjust her controller one peg toward the red. This restores her comfort level but very soon as a result he will begin to feel the discomfort of unexpectedly hot and dry air blowing into his zone and he will react by moving his controller one peg toward the blue.

This is not likely to end well.

I miss you like this title misses the point

Star Michigan guard Trey Burke collected two personal fouls in the early minutes of the National Championship game against Louisville and he was promptly benched, sitting out most of the remaining first half.  The announcers didn’t bother to say why because it’s common wisdom that you don’t want your best players fouling out early.

But the common wisdom requires some scrutiny because on its surface it actually looks absurd.  You fear your best player fouling out because then his playing time might be limited.  So in response you guarantee his playing time will be limited by benching him.  Jonathan Weinstein once made this point.

But just because basketball commentators, and probably even basketball coaches, don’t properly understand the rationale for the strategy doesn’t mean the strategy is unsound.  In fact it follows from a very basic strategic idea:  information is valuable.

Suppose the other team is scoring points at some random rate: if they are lucky they score a lot, and if they are less lucky they score fewer points.  If the other team scores a lot your team should start shooting threes and go for short possessions to catch up.  If the other team scores fewer you should go for safer shots and run down the clock. But you only know which of these you should do at the end of the game.  If your best players are on the bench at that time you cannot capitalize on this information.
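A stripped-down two-period sketch of that information value (all numbers invented):

```python
# Why benching a star in foul trouble can be optimal: his late minutes are
# worth more than his early minutes, because by then you know whether to
# chase the game or protect a lead, and his presence helps execute either
# plan. All parameters are invented.

P_FOUL_OUT = 0.5      # chance he fouls out if he keeps playing the 1st half
V_EARLY = 0.10        # win-prob boost from star minutes in the 1st half
V_LATE = 0.18         # bigger boost late, once the right strategy is known
BASE = 0.50           # win probability with the star benched all game

# Option A: keep playing him now. With prob 1/2 he fouls out and gives you
# nothing late; with prob 1/2 you get both halves.
play_on = BASE + V_EARLY + (1 - P_FOUL_OUT) * V_LATE

# Option B: bench him now, guaranteeing his (more valuable) late minutes.
bench_now = BASE + V_LATE

print(f"play him through foul trouble: {play_on:.3f}")   # 0.690
print(f"bench him until he is safe:    {bench_now:.3f}") # 0.680
# With these numbers playing on barely wins; raise P_FOUL_OUT to 0.6 and
# benching wins. The comparison turns entirely on how much more his
# minutes are worth after the late-game information has arrived.
```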

NEWRY, Maine — A Finnish couple has added to their victories by taking first place in the North American Wife Carrying Championship at Maine’s Sunday River ski resort.

Taisto Miettinen and Kristina Haapanen traveled from Helsinki, Finland – where they won the World Wife Carrying Championship – for Saturday’s contest. The Sun Journal (bit.ly/Q30QWq) reports that the couple finished with a time of 52.58 seconds on a course that includes hurdles, sand traps and a water hole.

The winners receive the woman’s weight in beer and five times her weight in cash.

The model:  At date 0 each of N husbands decides how fat his wife should be.  At date 1 they run a wife-carrying race, where the husband’s speed is given by some function f(s,w) where s is the strength of the husband, and w is the weight of his wife.  The function f is increasing in its first argument and decreasing in the second. The winner gets K times his wife’s weight in cash and beer.  Questions

  1. If the husbands are symmetric what is the equilibrium distribution of wife weights?
  2. Under what conditions on f does a stronger husband have a fatter wife?
  3. Derive the comparative statics with respect to K.

We have coordinated on April 1 as the date when everyone gets to indulge their latent desire to say something false and hope that it gets believed.  The problem of course is that as a result nothing you say on April 1 is believed.  Credibility has a public good aspect to it and the social optimum would conserve enough of it so that at least some of us could feed our hilarious public deception jones.

(You might argue otherwise.  By reducing credibility across the board we set the stage for the truly exceptional liars to show their stuff fooling people even though everyone was expecting them to do exactly that.)

It soon becomes tempting to start making up stuff on the day before April 1, and then eventually sooner.  Whether, how, and why this unfolds depends on just what the basic forces at work here are, a question about which we can only theorize.  We know by revealed preference that people want to say made-up stuff.  But it’s more than that: you want people to believe it, pass it on, and then get called out for believing a fake story.  And the best kind of April Fools story is the kind that is ex post so obviously fake that the foolee looks especially gullible.

All of these suggest that April 1 is the least ideal day for April Fools.  So then we can ask why are April Fools pranks perpetrated on April 1 and not some other day? Is it because April 1 gives you reputational cover for reporting bogus stories?  It sounds like a good theory, and if it were true we would not have to worry about March 31 Fools and the eventual Year-Round Fools. But it doesn’t seem to survive closer scrutiny.  If my reputation for legitimate reporting is safe for that one day it must be because nobody expects me to do legitimate reporting that day.  But then April 1 is the last day I would want to be dropping my April Fools.

Instead I think it’s something more subtle.  The perfect April Fools prank works by first roping in the reader but then slowly revealing clues that remind her that it’s April Fools and she has been had.  April 1 plays a crucial role in this development. The Reveal would not come with the same impact on any July 19.  Indeed the best April Fools pranks tell you the date somewhere along the way.  It’s how you say to the reader “look at you, you forgot about April Fools, and I got you” without actually saying that.  And it provides ammunition when she blindly passes the story on and her friends can say “check the date.”

So April Foolers need the following epistemic infrastructure for their pranks:  1) the possibility of surprise, 2) the expectation of surprise.  And clearly it cannot be an equilibrium to have both of these if the source of #2 is to be contained in a single date, April 1.  Once there is too much of #2, there’s precious little of #1.

And that’s when the pre-emption begins.  By publishing your April Fool on March 31, you get more of #1, and still pretty much the same amount of #2.  Until of course everybody starts doing that and the unraveling continues.  (We are already getting close.)

There is one bright side to all of this.  When everybody has come to expect that whatever you say on April 1 is false, you may indeed no longer be able to fool people into believing something you made up.  But the flipside is a welcome opportunity for many wishing to come clean at minimum cost.  You now have a day when you can tell the truth and have nobody believe you.

Dear Northwestern Economics community. I was among the first to submit my bracket and I have already chosen all 16 teams seeded #1 through #4 to be eliminated in the first round of the NCAA tournament. In case you don’t believe me:

When I wear my Lululemons you can see all the way into my soul

Now that I got that out of the way, consider the following complete-information strategic-form game. Someone will throw a biased coin which comes up heads with probability 5/8. Two people simultaneously make guesses. A pot of money will be divided equally among those who correctly guessed how the coin would land. (Somebody else gets the money if both guess incorrectly.)

In a symmetric equilibrium of this game the two players will randomize their guesses in such a way that each earns the same expected payoff. But now suppose that player 1 can publicly announce his guess before player 2 moves. Player 1 will choose heads and player 2’s best reply is to choose tails. By making this announcement, player 1 has increased his payoff to a 5/8 chance of winning the pot of money.
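A quick check of both claims, with the pot normalized to 1 (the code is mine; the 7/8 mixing probability falls out of the indifference condition):

```python
# Heads comes up with probability 5/8. If both guess right they split the
# pot; a lone correct guesser takes it all.
from fractions import Fraction

PH = Fraction(5, 8)                      # P(heads)

def payoff(my_guess: str, q_opp_heads: Fraction) -> Fraction:
    """Expected payoff given the opponent guesses heads with prob q."""
    if my_guess == "H":
        return PH * (q_opp_heads * Fraction(1, 2) + (1 - q_opp_heads))
    return (1 - PH) * ((1 - q_opp_heads) * Fraction(1, 2) + q_opp_heads)

# Symmetric mixed equilibrium: q makes both guesses equally good.
# (5/8)(1 - q/2) = (3/8)(1/2 + q/2)  =>  q = 7/8.
q = Fraction(7, 8)
assert payoff("H", q) == payoff("T", q)
print("simultaneous game, each player's payoff:", payoff("H", q))  # 45/128

# Now player 1 publicly announces heads. Player 2's best reply is tails:
assert payoff("T", Fraction(1)) > payoff("H", Fraction(1))
print("announcer's payoff:", PH)                                   # 5/8
# Committing first raises player 1's payoff from 45/128 to 80/128.
```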

This principle applies to just about any variety of bracket-picking game, hence my announcement. In fact in the psychotic version we play in our department, the twisted brainchild of Scott Ogawa, each matchup in the bracket is worth 1000 points to be divided among all who correctly guess the winner, and the overall winner is the one with the most points. Now that all of my colleagues know that the upsets enumerated above have already been taken by me, their best responses are to pick the favorites. Sure, they will be correct with high probability on each, but they will split the 1000 points with everyone else, and I will get the full 1000 on the inevitable one or two upsets that will come from that group.

Remember how Mr. Miyagi taught The Karate Kid how to fight?  Wax on/Wax off. Paint the fence. Don’t forget to breathe. A coach is the coach because he knows what the student needs to do to advance. A big problem for coaches is that the most precocious students also (naturally) think they know what they need to learn.

If Mr. Miyagi told Daniel that he needed endless repetition of certain specific hand movements to learn karate, Daniel would have rebelled and demanded to learn more and advance more quickly. Mr. Miyagi used ambiguity to evade conflict.

An artist with natural gift for expression needs to learn convention. But she may disagree with the teacher about how much time should be spent learning convention. If the teacher simply gives her exercises to do without explanation her decision to comply will be on the basis of an overall judgment of whether this teacher, on average, knows best. To instead say “You must learn conventions, here are some exercises for that” runs the risk that the student moderates the exercises in line with her own judgment about the importance of convention.

Pope Floats

this is a screenshot, from a few minutes ago (ed:  last week), of bwin.com. the bets here are on goals in regular time of the barcelona-milan game to be played in a little while. barcelona lost 2-0 in milan so barcelona needs at least 2 goals to force extra-time/penalty kicks. this is for the champions league.

as you can see from the screenshot barcelona winning 1-0 pays 10, 2-0 pays 7.5, 3-0 pays 8.75, while 4-0 pays 12.

what can we learn from this non-monotonicity? gamblers anticipate that barcelona’s extra incentives to score the 2-0 goal make it a more likely event than the 1-0 result (even though they have to score an extra goal!). once they have scored the 2-0, those extra incentives vanish so we are back to the intuition that a result with more goals is less likely.

How could this effect play out in real time?  Here’s a model.  It takes effort to increase the probability of scoring a goal.  An immediate implication is that if the score is 0-0 with little time left, Barcelona will stop spending effort and the game will end 0-0.  Too late in the game and it becomes so unlikely they can score two goals that the effort cost isn’t worth it.  But if the score is 1-0 they will continue to spend effort beyond that point.  So there is some interval near the end of the game where the conditional probability of scoring a goal is positive if the score is 1-0 but close to zero if the score is 0-0.

I would be interested in seeing some numbers calibrated to generate the betting odds above.  We need three parameters.  The first two are the probabilities of scoring a goal in a given minute of game time when Barcelona spends effort and when it does not.  The third is Barcelona’s rate of substitution between effort and win-probability.  This could be expressed as follows: over the course of a minute of play, what is the minimum increase in win probability that would give Barcelona sufficient incentive to spend effort? These three parameters will determine when Barcelona stops spending effort in the 1-0 versus 0-0 scenarios, and given this will then determine the probabilities of 1-0, 2-0, 3-0, etc. scores.
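Here is a rough simulation along those lines (parameters guessed rather than fitted to the bwin odds), with the effort rule collapsed into two give-up cutoffs:

```python
# Barcelona needs 2 goals to survive. Each minute it either spends effort
# (higher scoring chance) or coasts. Trailing by two (0-0 on the night) it
# gives up with T0 minutes left; needing one more goal (1-0) it keeps
# pushing until only T1 < T0 minutes remain. All parameters are guesses.
import random
from collections import Counter

random.seed(1)
P_EFFORT, P_COAST = 0.030, 0.010   # per-minute scoring probabilities
T0, T1 = 25, 5                     # minutes-left give-up cutoffs at 0-0, 1-0
MINUTES, SIMS = 90, 100_000

scores = Counter()
for _ in range(SIMS):
    goals = 0
    for minute in range(MINUTES):
        left = MINUTES - minute
        if goals == 0:
            effort = left > T0      # two goals still needed: quit early
        elif goals == 1:
            effort = left > T1      # one goal away: keep pushing much longer
        else:
            effort = False          # aggregate tie secured: shut it down
        if random.random() < (P_EFFORT if effort else P_COAST):
            goals += 1
    scores[goals] += 1

for g in range(4):
    print(f"P(Barcelona scores exactly {g}): {scores[g] / SIMS:.3f}")
# With cutoffs like these, 1-0 comes out *less* likely than 2-0: once the
# first goal arrives, the incentive to keep spending effort makes a second
# goal likely, while a team still stuck at 0-0 late simply stops trying.
```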

Arthur Robson wrote this on Facebook:

If I am peacefully working out, and someone else arrives in the gym, they usually grab the TV remote to bathe in the inane chatter of preternaturally perky news shows. What if I were to arrive while they were watching TV and switched it off?

Which is a good point but still I think that a case can be made that it is morally allowed to turn on the TV but not to turn it off.

If you walk into the gym and the TV is on, that fact is a strong signal that somebody is watching it and would be harmed if you turned it off.  On the other hand when the TV is off you have much less information about what people are paying attention to.  You only know that nobody turned the TV on.  This is consistent with everybody being indifferent to the TV being on and off.

The point being that a utilitarian calculation based only on the signal of whether the TV is on or off will always make it strictly more permissible to turn the TV on than to turn it off.
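A back-of-the-envelope version of that inference (the type probabilities are invented): one other person is in the gym, the TV starts off, and only someone who wants it on will have turned it on.

```python
# One other person is in the gym; the TV starts off. Someone who wants it
# on will have turned it on; everyone else leaves it as found. The type
# probabilities below are invented for illustration.
P_WANTS_ON, P_INDIFFERENT, P_WANTS_OFF = 0.3, 0.5, 0.2

# If you find the TV on, the only explanation is that the other person
# turned it on, so switching it off harms them with probability 1.
p_harm_if_you_turn_off = 1.0

# If you find the TV off, the other person is either indifferent or
# actively wants it off; only the latter is harmed if you turn it on.
p_harm_if_you_turn_on = P_WANTS_OFF / (P_INDIFFERENT + P_WANTS_OFF)

print(f"P(harm | turn it off) = {p_harm_if_you_turn_off:.2f}")   # 1.00
print(f"P(harm | turn it on)  = {p_harm_if_you_turn_on:.2f}")    # 0.29
# The TV being on is much stronger evidence that someone cares than the TV
# being off -- the asymmetry behind the turn-on-but-not-off code. As noted
# below, the inference breaks down once everyone follows that code.
```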

But note that the inference is a function of the moral code. And if people are following the turn-on-but-not-off code then the TV will be on even if nobody is watching it.

So what we need is an equilibrium code:  a code that works even in equilibrium, when it is expected to be followed by others.

There are two actions, A and B, and there are two observable types of people L and R.  Everybody is the same in the following sense:  for any single individual either A or B is the optimal action but which one it is depends on an unknown state of the world.

But in another sense people are heterogeneous.  It is common knowledge that in the state of the world where A is best for people of type L, B is best for people of type R.  And in the other state it’s the other way around.  Each person observes a private signal that contains information about the state of the world.

Acting in isolation everybody would do exactly the same thing:  pick the action that is best according to their belief (based on the private signal) about the state of the world.  But now embed this in a model of social learning.  People make their choices in sequence and each observes the choices made by people who went before.

Standard herding logic tells us that L’s and R’s will polarize and choose opposite actions even if they get it completely wrong (with L’s choosing the action that is best for R’s and R’s choosing the action that is best for L’s).

(A reminder of how that works.  Say that an L moves first.  He chooses the action that looks the best to him say A.  Now suppose the next guy is an R and by chance action B looks best to him.  The third guy is going to look at the previous two and infer from their choices that there is strong information that the true state is such that A is good for L’s and B is good for R’s. This information can be so strong that it swamps his one little private signal and he follows the herd:  choosing A if he is L or B if he is R.  This perpetuates itself with all subsequent decision makers.)

In effect the L’s choose A just because the R’s are choosing B and vice versa.
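A bare-bones simulation of that story (my parameterization): signals are correct with probability 0.6, the public log-odds track revealed signals, and a cascade starts once the public belief swamps a single signal.

```python
# Herding with two observable types. State 0 means action A suits type L
# and B suits type R; state 1 reverses that. All parameters are mine.
import math
import random

random.seed(2)
STEP = math.log(0.6 / 0.4)   # log-likelihood weight of one revealed signal

def run(n_agents=40):
    state = random.randrange(2)          # which action suits which type
    public = 0.0                         # public log-odds in favor of state 1
    correct = True
    for _ in range(n_agents):
        agent_type = random.choice("LR")
        signal = state if random.random() < 0.6 else 1 - state
        private = STEP if signal == 1 else -STEP
        if abs(public) > STEP:           # cascade: one signal can't overturn it
            belief = public              # so the action reveals nothing new
        else:
            belief = public + private
            public = belief              # observers can back the signal out
        inferred = 1 if belief > 0 else 0
        # Illustrative: both types act on the same inferred state, so L's
        # and R's always take opposite actions, right or wrong.
        action = "A" if inferred == 0 else "B"
        if agent_type == "R":
            action = "B" if action == "A" else "A"
        correct = (inferred == state)
    return correct                       # did late agents infer the true state?

wrong = sum(not run() for _ in range(20_000))
print(f"runs that lock in the wrong assignment: {wrong / 20_000:.1%}")
# Even with informative signals, roughly 30% of runs cascade the wrong way:
# the L's choose the R-best action and vice versa, each side doing the
# opposite of the other purely because the other is doing it.
```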

From Catherine Rampell:

In several computer science courses at Johns Hopkins University, the grading curve was set by giving the highest score on the final an A, and then adjusting all lower scores accordingly. The students determined that if they collectively boycotted, then the highest score would be a zero, and so everyone would get an A. Amazingly, the students pulled it off:

Her analysis of the problem would be the starting point for a nice introductory example in a game theory class (although she appears to be saying that taking the test is weakly dominant, which I doubt is true if there is a positive opportunity cost of time).
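A two-student version of the game (payoffs stylized, and the cost parameter c is my addition) makes the point about weak dominance:

```python
# Grades are worth A = 4 down to 0; taking the exam costs c > 0 in study
# time. If both boycott, the top score is zero and both get A's. If only
# one takes it, the taker sets the curve and the boycotter's zero becomes
# a failing grade. All payoffs are stylized.
def payoffs(i_take: bool, j_take: bool, c: float = 0.5) -> float:
    if not i_take and not j_take:
        return 4.0                      # joint boycott: everyone gets an A
    if i_take:
        return 4.0 - c                  # you (weakly) top the curve, net of c
    return 0.0                          # you boycotted but the curve moved

# With c > 0, taking the exam is NOT weakly dominant: against a boycott,
# boycotting strictly beats taking (4.0 > 3.5). That is what makes the
# all-boycott outcome a genuine Nash equilibrium rather than a knife-edge.
assert payoffs(False, False) > payoffs(True, False)
# But against an opponent who takes the exam, you must take it too:
assert payoffs(True, True) > payoffs(False, True)
print("boycott is a best reply to boycott; take is a best reply to take")
```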

Kava tembel tumble:  Arthur Robson

Another good one from Scott Ogawa.  It’s the Creampuff Dilemma.  A college football coach has to set his pre-season non-conference schedule, thinking ahead to the end-of-season polling that decides bowl bids.  A schedule stocked with creampuffs means lots of easy wins.  But holding fixed the number of wins, a tough schedule will bolster your ranking.

Here’s Scott’s model.  Each coach picks a number p between 0 and 1.  He is successful (s=1) with probability p and unsuccessful (s=0) with probability 1-p.  These probabilities are independent across players.  (Think of these as the top teams in separate conferences.  They will not be playing against each other.)

Highest s-p wins.
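Working out the win probabilities in my formalization of Scott's setup (ties split the prize) shows why no pure-strategy symmetric equilibrium survives:

```python
def p_win(p1: float, p2: float) -> float:
    """Probability coach 1's s - p beats coach 2's, counting ties as 1/2."""
    total = 0.0
    for s1, pr1 in ((1, p1), (0, 1 - p1)):
        for s2, pr2 in ((1, p2), (0, 1 - p2)):
            d = (s1 - p1) - (s2 - p2)
            total += pr1 * pr2 * (1.0 if d > 0 else 0.5 if d == 0 else 0.0)
    return total

# When both coaches land in the same state (both successful or both not),
# the tougher schedule (lower p) wins. So slightly undercutting an opponent
# grabs every tied outcome plus the outcome where only you succeed:
p2 = 0.5
print(f"match p2 = 0.5 exactly: {p_win(0.50, p2):.3f}")   # 0.500
print(f"undercut to p1 = 0.49:  {p_win(0.49, p2):.3f}")   # 0.745
print(f"all creampuffs, p1 = 1: {p_win(1.00, p2):.3f}")   # 0.500
# Bertrand-style undercutting kills any symmetric pure equilibrium with
# p > 0, and matching at p = 0 fails too (deviating to a high p then wins
# with high probability), so equilibrium schedules must be mixed.
```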

The clocks in Grand Central are off by 1 minute:

But Grand Central, for years now, has relied on a system meant to mitigate, if not prevent, all the crazy. It is this: The times displayed on Grand Central’s departure boards are wrong — by a full minute. This is permanent. It is also purposeful.

The idea is that passengers rushing to catch trains they’re about to miss can actually be dangerous — to themselves, and to each other. So conductors will pull out of the station exactly one minute after their trains’ posted departure times. That minute of extra time won’t be enough to disconcert passengers too much when they compare it to their own watches or smartphones … but it is enough, the thinking goes, to buy late-running train-catchers just that liiiiiitle bit of extra time that will make them calm down a bit. Fast clocks make for slower passengers. “Instead of yelling for customers to hurry up,” the Epoch Times notes, “the conductors instead tell everyone to slow down.”

Not everyone is going to equilibrate, just the regulars.  But that’s exactly what you want.  If you set the clock right then everyone is rushing to the train just when it’s departing.  If you set the clock 1 minute off and everyone equilibrates then still everyone rushes to the train when it’s departing.

The system works because some of the people adjust to the clock and others don’t.  So the rush is spread over two minutes rather than one.

According to this video by Tim Harford, this includes: designing wargames for Kissinger et al., helping Kubrick with Dr. Strangelove, suggesting the red telephone be installed between the USA and the Soviet Union, and trying but failing to dissuade a bombing campaign in Vietnam. On the intellectual plane (as well as game theory), he was decades ahead of his time in thinking about behavioral economics (because he was trying to give up smoking), doing the first agent-based model (of discrimination), and thinking about climate change. Near-fatal flaw for the Nobel Committee: not enough math.

Great for teaching. The part on mutually assured destruction vs. common-interest games could easily be folded into a discussion of Nash vs. subgame perfect equilibrium, and hence credibility.