Sandeep and I are writing a paper on torture.  We are trying to understand the mechanics and effectiveness of torture viewed purely as a mechanism for extracting information from the unwilling.  A major theme we are finding is that torture is complicated by numerous commitment problems.  We have blogged about these before.  Here is Sandeep’s first post on torture which got this whole project started.

A big problem is that torture takes time and when the victim has resisted repeated torture it becomes more and more likely that he actually has no information to give.  At this point the torturer has a hard time credibly committing to continue the torture because in all likelihood he is torturing an innocent victim.  This feeds back into the early stages of the torture because it increases the temptation for the truly informed victim to resist torture and pretend to be uninformed.

In light of this it is possible to say something about the benefits of adopting more and more severe forms of torture, waterboarding say.  A naive presumption is that a technology which delivers suffering at a faster pace would circumvent the problem because it makes it harder for the informed victim to hold out long enough.

But this logic is backwards.  Indeed, if it were true that more severe torture induced the informed to reveal their information early, then this would only hasten the time at which the torture ceases because the torturer becomes convinced that his heretofore silent victim is in fact innocent.  So credible torture requires that those who resist the now more severe torture must find compensation in the form of less information revealed in the future.  In the end the informed victim is no worse off and this means that the torturer is no better off.

Once you account for that, what you are left with is more suffering inflicted on the uninformed victim, who has no alternative but to resist.  And this only makes it more difficult to continue torturing once the victim has demonstrated he is innocent.  That is, the original commitment problem is only made worse.

Either of these two conditions is sufficient to justify all-out effort:

(1) When the race is close or

(2) When the prize is big.

Federer can afford to relax in a match with small stakes or when he is close to the winning point.  But he should work hard at Wimbledon and if the match is tied.  The same principles apply in elections.  In the Massachusetts special election, it has been known for months that the second condition is satisfied: without the Massachusetts Senate seat, the 60 votes needed to break a filibuster are gone.  The White House, the Democratic Party etc. should have been focused on the Massachusetts race on this ground alone.  Belatedly, it was discovered that condition (1) also obtained.  But even if you didn’t know that, you did know that the coalition you had to hold together to get stuff through the Senate was pretty fragile and one crack was enough to send the whole thing flying.

The Republican Party recognized that the second condition was key, worked like crazy and played the optimal strategy.  It’s common sense but only one party seemed to get it.

Ingenious new support for that view:

The mighty insect colonies of ants, termites and bees have been described as superorganisms. Through the concerted action of many bodies working towards a common goal, they can achieve great feats of architecture, agriculture and warfare that individual insects cannot.

That’s more than just an evocative metaphor. Chen Hou from Arizona State University has found that the same mathematical principles govern the lives of insect colonies and individual animals. You could predict how quickly an individual insect grows or burns food, how much effort it puts into reproduction and how long it lives by plugging its body weight into a simple formula.  That same formula works for insect colonies too, if you treat their members as a collective whole.

And this is not just an accounting trick.  If you take a “colony” of, say, 100 people and measure how much energy their bodies use, it would be 100 times the energy that a single body uses (duh.)  But a single animal that weighs 100 times as much as a human uses only 100^(3/4) ≈ 32 times as much energy as a single human.  There are economies of scale within a single organism but not across organisms.
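The arithmetic is easy to check.  Here is a quick sketch of the 3/4-power (Kleiber) scaling described above; the numbers are just the ones in the text, nothing from Hou’s paper itself:

```python
# A quick check of the 3/4-power (Kleiber) scaling described above.
def metabolic_rate(mass):
    """Relative energy use of one organism; a human is mass = 1."""
    return mass ** 0.75

crowd = 100 * metabolic_rate(1)   # 100 separate human-sized bodies
giant = metabolic_rate(100)       # one organism with 100x the mass

print(crowd)            # 100.0
print(round(giant, 1))  # 31.6 -- the "about 32" in the text
```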

Except with ant colonies.  The relationship between mass and energy use for the colony as a whole follows the same law that governs individuals of non-colony animals.  Via Not Exactly Rocket Science.

It’s been blog fodder the past week.

In other words, to pull off a successful boast, you need it to be appropriate to the conversation. If your friend, colleague, or date raises the topic, you can go ahead and pull a relevant boast in safety. Alternatively, if you’re forced to turn the conversation onto the required topic then you must succeed in provoking a question from your conversation partner. If there’s no question and you raised the topic then any boast you make will leave you looking like a big-head.

It makes perfect sense.  First of all, purely in terms of how much I impress you, an unprovoked boast is almost completely ineffective.  Because everybody in the world has something to boast about.  If I get to pick the topic then I will pick that one.  If you pick the topic or ask the question then the odds you serve me a boasting opportunity are long unless I am truly impressive on many dimensions.

And that explains why you think I am a jerk for blowing my own horn.  I reveal either that I don’t understand this logic and am just trying to impress you, or that I think you don’t understand it and can be fooled into being impressed by me.
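A toy Bayesian calculation makes the logic concrete.  All of the numbers here are invented for illustration: 10 possible topics, a “truly impressive” type with something to boast about on 5 of them, an ordinary type with 1:

```python
# Toy Bayesian model of provoked vs. unprovoked boasting.
# All numbers are invented for illustration.
TOPICS = 10
prior_impressive = 0.5
boastworthy = {"impressive": 5, "ordinary": 1}

# Unprovoked boast: each type steers the conversation to his own best
# topic, so both types boast with probability 1 -- no information.
posterior_unprovoked = (prior_impressive * 1.0) / (
    prior_impressive * 1.0 + (1 - prior_impressive) * 1.0
)

# Provoked boast: the listener picks the topic, so a type with k
# boastworthy topics gets a boasting opportunity with probability k/10.
p_imp = boastworthy["impressive"] / TOPICS   # 0.5
p_ord = boastworthy["ordinary"] / TOPICS     # 0.1
posterior_provoked = (prior_impressive * p_imp) / (
    prior_impressive * p_imp + (1 - prior_impressive) * p_ord
)

print(posterior_unprovoked)           # 0.5 -- no update at all
print(round(posterior_provoked, 3))   # 0.833 -- a strong update
```

The unprovoked boast leaves the listener exactly at her prior, while the provoked boast is genuinely informative.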

Via kottke.org

And this is what I sometimes worry about: do I put them back on top of the stack? Do I put the bowls back in the empty front spot on the shelf? Because if I do that, then guess which dishes are going to get reached for the next time? That’s right, the same ones.

I think about this every time I put our dishes back in the cupboard. I assumed it was just me and that I was crazy.

The worry is that the dishes in regular rotation will depreciate faster than the ones that get stuck at the bottom of the stack.  I would say that if this were a potential problem then you have more dishes than you need.  Do you go a month without a single day in which you work your way to the bottom?  Then you can safely get rid of a number of dishes equal to the number that never gets used.

Maybe you have occasional large gatherings and the extra dishes are there just for those occasions.  Then it would seem that the question turns on a comparison of the time it takes before the difference in wear is noticeable and the frequency of these gatherings.  Still I would say the trade-off is non-existent.  First, unless you are buying really cheap dishes, the time span we are talking about here is measured in years not months.  You can always rent dishes for your party.

Second, we are talking about just a few extra dishes.  Forcing them into the rotation will indeed ensure uniformity.  Now they will all be dented and scratched.  Again, renting is the remedy if your concern is the impression you make on your guests.

When you search google you are presented with two kinds of links.  Most of the links come from google’s webcrawlers and they are presented in an order that reflects google’s PageRank algorithm’s assessment of their likely relevance.  Then there are the sponsored links.  These are highlighted at the top of the main listing and also lined up on the right side of the page.

Sponsored links are paid advertisements.  They are sold using an auction that determines which advertisers will have their links displayed and in what order.  While the broad rules behind this auction are public, google handicaps the auction by adjusting bids submitted by advertisers according to what google calls Quality Score.  (Yahoo does something similar.)

If your experience with sponsored links is similar to mine you might start to wonder whether Quality Score actually has the effect of favoring lower quality links.  Renato Gomes, in his job market paper explains why this indeed might be a feature of the optimal keyword auction.

The idea is based on the well-known principle of handicaps for weak bidders in auctions.  Let’s say google is auctioning links for the keyword “books” and the bidders are Amazon.com plus a bunch of fringe sites.  If Amazon is willing to bid a lot for the ad but the others are willing to bid just a little, an auction with a level playing-field would allow Amazon to win at a low price.  In these cases google can raise its auction revenues by giving a handicap to the little guys.  Effectively google subsidizes their bids making them stronger competitors and thereby forcing Amazon to bid higher.
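A quick simulation illustrates the revenue logic.  To be clear, this is not Gomes’s model: it is a stylized weighted second-price auction with made-up uniform value distributions, where the fringe bidder’s bid is scored with a multiplier k.  (Under this kind of scoring rule truthful bidding is still dominant, since each bidder pays the lowest bid that would have won.)

```python
import random

def expected_revenue(k, n=200_000, seed=0):
    """Weighted second-price auction: the fringe bidder's bid is scored
    as k * bid, and the winner pays the lowest bid that would still have
    won given the rival's score."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        strong = rng.uniform(0, 10)   # Amazon-like bidder's value
        weak = rng.uniform(0, 2)      # fringe bidder's value
        if strong > k * weak:
            total += k * weak         # strong wins; pays the threshold bid
        else:
            total += strong / k       # weak wins; pays strong's bid deflated by k
    return total / n

level = expected_revenue(k=1)      # level playing field
handicap = expected_revenue(k=2)   # fringe bids scored double
```

With these made-up distributions the handicap raises expected revenue from roughly 0.93 to roughly 1.6: the fringe bidder pays less on the rare occasions it wins, but the strong bidder is forced to pay more almost all the time.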

Of course it’s rare that the stronger bidder is so easy to identify and anyway the whole auction is run instantaneously by software.  So how would google implement this idea in practice?  Google collects data on how often users click through the (non-sponsored) links it provides to searchers.  This gives google very good information about how much each web site benefits from link-generated traffic.  That’s a pretty good, albeit imperfect, measure of an advertiser’s willingness to pay for sponsored links.  And that’s all google would need to distinguish the strong bidders from the weak bidders in a keyword auction.

And when you put that all together you see that the weak guys will be exactly those websites that few people click through to.  The useless links.  The revenue-maximizing sponsored link auction favors the useless links and as a consequence they win the auction far more frequently than they would if the playing-field were level.

(To be perfectly clear, nobody outside of google knows exactly how Quality Score is actually calculated, so nobody knows for sure if google is intentionally doing this.  The analysis just shows that these handicaps are a key part of a profit-maximizing auction.)

Renato’s job market paper derives a number of other interesting properties of an optimal auction in a two-sided platform.  (Web search is a two-sided platform because the two sides of the market, users and advertisers, communicate through google’s platform.)  For example, his theory explains why advertisers pay to advertise but users don’t pay to search.  Indeed google subsidizes users by giving them all kinds of free stuff in order to thicken the market and extract more revenues from advertisers.  On the other hand, dating sites and some job-matching sites charge both sides of the market and Renato derives the conditions that determine which of these pricing structures is optimal.

  1. Interview with an anonymous Facebook employee.  (via kottke)
  2. Your pal, John Kricfalusi
  3. Haunted junk.

My attempt at describing a paper in words.  The paper is by Anirban Mitra and Debraj Ray and I am going to offer a simplified version of it.

Suppose an aggressor faces a victim and decides whether to attack the victim or not.  Each has wealth which can be stolen at some cost.  Each belongs to some group, e.g. Hindu or Muslim.  The aggressor’s group can decide how much to invest in a “conflict infrastructure” that reduces the cost of an attack to a member of the aggressor group.  How does the number of attacks change as a function of aggressor and victim incomes?

First, suppose victim incomes increase keeping aggressor incomes fixed.  There are two effects.  Keeping investment in conflict infrastructure fixed, the aggressor group certainly has a greater incentive to attack, and this effect increases attacks.  In principle, if investment in conflict infrastructure fell significantly, this might reduce the number of attacks.  But since the benefits of attacking have gone up while aggressor incomes are fixed, this second effect is never going to be large enough to cancel out the first one.  The first conclusion is that as victim incomes go up, so does the number of attacks.

Second, suppose aggressor incomes go down keeping victim incomes fixed.  There are two effects.  The first effect is the same:  a fall in aggressor incomes increases their incentive to attack the victims, in the same way that a rise in victim incomes increases the incentive to attack, keeping investment in conflict infrastructure fixed.  But since aggressor income has gone down, investing in conflict infrastructure has become more costly.  If this second effect is large enough, it can cancel out the first effect.  The second conclusion is that the impact of a change in aggressor incomes on the number of attacks is ambiguous.
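A toy numerical version of the first comparative static, with invented functional forms (this is a loose illustration, not the model of Mitra and Ray):

```python
# Invented functional forms, for illustration only.  Individual attack
# costs are uniform on [0, 1]; infrastructure I scales them down to
# c * (1 - I), so an individual attacks when victim wealth v exceeds his
# effective cost and the fraction attacking is min(1, v / (1 - I)).
# The aggressor group pays kappa * I^2 / y_a for infrastructure, so a
# fall in aggressor income y_a makes infrastructure more expensive.

def attacks(v, y_a, kappa=3.0, grid=1000):
    """Fraction attacking at the group's optimal infrastructure choice."""
    best_payoff, best_frac = float("-inf"), 0.0
    for step in range(grid):
        i = 0.95 * step / grid              # infrastructure in [0, 0.95)
        frac = min(1.0, v / (1 - i))        # fraction of the group attacking
        payoff = frac * v - kappa * i * i / y_a
        if payoff > best_payoff:
            best_payoff, best_frac = payoff, frac
    return best_frac

# First comparative static: attacks rise with victim wealth, both
# directly and through the induced investment in infrastructure.
levels = [attacks(v, y_a=1.0) for v in (0.2, 0.4, 0.6, 0.8)]
```

The second comparative static is genuinely ambiguous in the paper, and this sketch does not capture it: here a fall in y_a only raises the cost of infrastructure, so it can only reduce attacks.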

Then the authors use the model to offer an interpretation of Hindu-Muslim violence in India.  They have data on violence and also data on household expenditures by ethnic group.  They show that a change in Hindu expenditures has an insignificant effect on ethnic violence but that an increase in Muslim incomes has a large, positive and significant effect on violence.

Using the model to interpret the data, we would conclude that Hindus are the aggressor group and Muslims are the victim group.  The combination of the theory and the data is necessary to offer this interpretation.  I like this aspect of the paper as well as the quite surprising identification of the aggressor group vs. the victim group.  Why is there this asymmetry?  The authors offer some interesting speculations based on Indian Partition.  I highly recommend the paper: the theory and the empirical analysis are simple enough that a theorist or an empiricist can understand the whole paper.  The conclusions are surprising and the methodological approach is attractive.

This lecture brings together everything built up to this point.  We are going to develop an intuition for why competitive markets are efficient using a model of profit maximizing sellers who compete in an auction market by setting reserve prices.  In the previous lecture we saw how the profit maximization motive leads a seller with market power to choose an inefficient selling mechanism.  This came in the form of a reserve price above cost.  Here we begin by getting some intuition why competition should reduce the incentive to distort price in this way.

(This is probably the weak link in the whole class.  I do not have a good idea of how to teach this and in fact I am not sure I understand it so well myself.  This is the first place to work on improving the class next time.  Any suggestions would be appreciated.)

Finally, we jump to a model with a large number of buyers and sellers all competing in a simultaneous ascending double auction.  With so much competition, if sellers set reserve prices above their costs there will be

  • no sellers who are doing better than if they just set the reserve price equal to cost
  • a positive mass of sellers who would do strictly better by reducing their reserve price to equal their cost

In that sense it is a dominant strategy for all sellers to set reserve price equal to their cost.  This equates the “supply” curve with the cost curve and produces the utilitarian allocation.  Here are the notes.
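The dominance argument can be sketched in a few lines.  This is a simplification of the model in the notes: a large simulated market pins down a clearing price, and a single seller, too small to move that price, compares reserve prices:

```python
import random

rng = random.Random(1)

# A large simulated market: each seller has one unit at a uniform cost,
# each buyer wants one unit at a uniform value.
costs = sorted(rng.uniform(0, 1) for _ in range(10_000))
values = sorted((rng.uniform(0, 1) for _ in range(10_000)), reverse=True)

# Quantity traded: the last unit whose buyer value still covers seller cost.
q = max(k for k in range(10_000) if values[k] >= costs[k])
price = (values[q] + costs[q]) / 2   # a market-clearing price, roughly 0.5

def profit(cost, reserve):
    """A single (measure-zero) seller cannot move `price`; she trades
    exactly when her reserve is at or below the market price."""
    return price - cost if reserve <= price else 0.0

# Reserve = cost weakly dominates any markup: the payoff is the same
# whenever the markup still clears, and zero instead of a positive
# profit whenever the markup prices the seller out of the market.
```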

We saw The Fantastic Mr Fox a few weeks ago.  It was a thoroughly entertaining movie and I highly recommend it.  But this is not a movie review.  Instead I am thinking about movie previews and why we all subject ourselves to sitting through 10-plus minutes of previews.

The movie is scheduled to start at the top of the hour, but we all know that what really starts at the top of the hour are the previews and they will last around 10 minutes at least.  Why don’t we all save ourselves 10 minutes of time and show up 10 minutes late?

Maybe you like to watch previews but I don’t and in any case I can always watch them online if I really want to.  I will assume that most people would prefer to see fewer previews than they do.

One answer is that the theater will optimally randomize the length of previews so that we cannot predict precisely the true starting time of the movie.  To guarantee that we don’t miss any of the film we will have to take the chance of seeing some previews.  But my guess is that this doesn’t go very far as an explanation and anyway the variation in preview lengths is probably small.

In fact, even if the theater publicized the true start time we would still come early.  The reason is that we are playing an all-pay auction bidding with our time for the best seats in the theater.  Each of us decides at home how early to arrive trading off the cost of our time versus the probability of getting stuck in the front row.  The “winner” of the auction is the person who arrives earliest, the prize is the best seat in the theater, and your bid is how early to arrive.  It is “all pay” because even the loser pays his bid (if you come early but not early enough you get a bad seat and waste your time.)

In an all pay auction bidders have to randomize their bids.  Because if you knew how everyone else was bidding you would arrive just before them and win.  But then they would want to come earlier too, etc.  The randomizations are calibrated so that you cannot know for sure when to arrive if you want to get a good seat and the tradeoffs between coming earlier and later are exactly balanced.
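The textbook version of this randomization is easy to verify numerically.  Under strong simplifying assumptions (two moviegoers, one good seat worth v = 1 measured in units of waiting time, complete information), the equilibrium has each person draw an arrival time uniformly on [0, 1], and every bid then earns the same zero expected payoff:

```python
import random

def expected_payoff(bid, v=1.0, n=200_000, seed=42):
    """Expected payoff from arriving `bid` early when the rival's arrival
    time is drawn uniformly on [0, v] -- the mixed equilibrium of a
    two-person complete-information all-pay auction with prize v."""
    rng = random.Random(seed)
    wins = sum(rng.uniform(0, v) < bid for _ in range(n))
    # You get the seat when you out-wait the rival, but you pay your
    # waiting time either way -- that is what makes it "all pay".
    return v * wins / n - bid

# Every arrival time earns roughly zero against the equilibrium
# randomization, so no deviation is profitable.
payoffs = [expected_payoff(b) for b in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

That indifference across all bids is exactly the sense in which the tradeoffs between coming earlier and later are balanced.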

As a result most people arrive early, sit and wait.  This is where the previews come in.  Since we are all going to be there anyway, the theater might as well show us previews.  Indeed, even people like me would rather watch previews than sit in an empty theater, so the theater is doing us a favor.

And this even explains why theater tickets are always general admission.  Let’s compare the alternative.  The theater knows we are “buying” our seats with our time.  The theater could try to monetize that by charging higher prices for better seats.  But it’s a basic principle of advertising that the amount we are willing to pay to avoid being advertised at is smaller than the amount advertisers are willing to pay to advertise to us.  (That is why pay TV is practically non-existent.)  So there is less money to be made selling us preferred seats than having us pay with our time and eyeballs.

I’m attending a conference in Madrid so I will either describe papers in words or write a tourist guide in equations.

Actually, I can’t translate either papers or anything else into equations so I will stick to words.  As the conference just started but we arrived a few days ago, I will start with the tourism.  When we arrived on Saturday morning, we decided to try to stay awake and adjust to the new time zone.  So we sleepily rolled into a taxi and made our way to Plaza Santa Ana.  The kids were vivacious before we climbed into the taxi and the older one was sleepy ten minutes later when we emerged.  We went into the closest open place, Cerveceria Santa Ana.  Plaza Santa Ana has many bars, one of which was frequented by Hemingway but it wasn’t the one we wandered into.  The bar is modest: it has one part where you stand and enjoy lower prices and another where you can sit.  Modest or not, the tapas were great.  And it was quiet enough that a child could sleep.

But the real treat was Matritum that evening.  It does innovative takes on well-known tapas and then wacky creative ones (though not Adria level wacky!).  If you’re going to do traditional tapas at all, you have to make patatas bravas and so one can rank restaurants in terms of the quality of this staple:  Matritum is excellent on this scale.  Boiled new potatoes with mildly spicy tomato sauce and a mayonnaise with delicious mystery spices.  Other things we tried: chicken and ginger samosas, deep fried pancakes with tiny shrimp, toasted bread with tomato and jamon iberico and chocolate brownie with violet ice cream and chocolate sauce.  The only weak point was a potato gratin with five cheeses which was a bit generic.  Oh: the wine list is excellent.  Imports of Spanish wine into the Chicago area, at least, can be overoaked and fruity.  At Matritum I had a delicate and floral Monastrell.  We’re going back before we move to Barcelona.

Musicians and academics are promiscuous collaborators. They flit from partnership to partnership sometimes for one-off gigs, sometimes for ongoing stints. In academia, regardless of the longevity of the group, the individual author is always the atomic unit. Co-authorships are identified simply with the names of the authors. Whereas musicians eventually form bands.

Bands have identities separate from the individuals in the bands. The name of the band stores that identity. It also solves a problem we face in academia of how to order the names of the contributors. You don’t. (There is evidence that the lexical ordering of names is good for Andersons and bad for Zames.) We should form bands too.

The idea of a band is important enough that sometimes even solo musicians incorporate themselves as bands. Roger Myerson is the Nine Inch Nails of game theory.

Bands work in the studio (writing papers) and then tour (giving seminars.) Musicians have two typical ways of organizing these. Jazz and pop bands create and perform as a group. Classical music is usually performed by specialists rather than the composer herself.

Our bands do something in between which is hard to understand when you think of it this way. We compose as a band but then perform as individuals. That’s weird because you would think you would want to hear either the composer or a performance specialist do the performing. If it is always the composer then it must be because the composer has a special insight into the performance. But then why not all of them? We should tour as bands sometimes. And we should also reward performance specialists who perform others’ work.

I want to name my bands. I want my next co-authored paper to be “by (insert name of band here) ” Sandeep, what do you say? Our torture paper will be “by Cheap Talk.” I look forward to making petulant demands and trashing hotel rooms.

In that other post, I was being serious.  But here, just for fun, let’s name some of the great economics bands.  I will start.

  1. Fudenberg and Levine:  The Gossamer Anvil, an early 70s jam band.
  2. Gul and Pesendorfer:  Mixtürhëad.
  3. Morris and Shin: Eskalator, prog rock.

When I read this (via Ryan Sager) about the classic good cop/bad cop negotiating ploy:

BUT there was also a twist we did not address in our research, and in fact, would have been tough to do as we were studying people in “the wilds” of organizational life.  Their research shows that starting with a good cop and then using a bad cop was not effective, that the method only was effective for negotiating teams when the bad cop went first and the good cop followed.   So, this may mean it really should be called “The Bad Cop, Good Cop Technique.”

it brought to mind some famous studies of Daniel Kahneman on perception and the timing of pleasure and pain.  There is one you will never forget once you hear about it.  Proctologists randomly varied the way in which they administered a standard colonoscopy procedure.  Some patients received the usual treatment in which a camera was plunged into their rectum and then fished around for a few minutes.  The fishing around is extremely uncomfortable.

An experimental group received a treatment which was identical except that at the very end <you can do better than that Beavis> the camera was left in situ <ok that’s pretty good> for an extra 20 seconds or so.  The subjects were interviewed during the procedure and asked to report their level of pain, and after the procedure to report on the procedure overall.  As intended, those in the experimental group reported that the final 20 seconds were less painful than the main part of the procedure.  But the headline finding of the experiment was that those subjects receiving the longer treatment found the procedure overall to be more tolerable than the control group.

Regular readers of this blog will know that I consider that a good thing.

The financial crisis is motivating a search for new models of asset markets and their interaction with the real economy.  It seems obvious that, for example, the housing bubble can only be explained by a model in which asset prices are bid up by the activity of highly optimistic investors or speculators.  Models which build in these divergent beliefs (and not just differences in information) are, perhaps very surprisingly to outsiders, only recently coming into mainstream economic theory.

Alp Simsek asks whether the presence of optimistic traders can inflate the price of assets, say housing prices.  It seems obvious, but remember that investment in housing is leveraged using collateralized loans where the house itself is the collateral.  If the optimists are borrowing from the “realists” to buy houses at overinflated prices, and they are offering up the house as collateral, then surely the realists aren’t willing to lend?

Alp shows that this logic sometimes holds, but not always.  And he formalizes a precise way of measuring optimism which determines whether the presence of optimists will inflate asset prices, or alternatively whether their optimism will be filtered out by the realists’ withholding of credit.

Suppose that you are a realist and you are making a loan to me to purchase a house.  A year later we will see whether housing prices have gone up or down.  If they go up, I will pay off the loan and realize a profit.  If they go down I will default on the loan.  A key idea is to understand that the loan effectively makes us partners in the purchase of the house.  I own it on the upside (and I pay you back your loan) and you own it on the downside.  We pay for the house together too: you contribute the loan amount and I contribute the down payment.

The equilibrium price of the house will be determined by how much we, as partners, are willing to pay.  I am an optimist and I would like to pay a lot for it, but I am financially constrained so my contribution to the total price is some fixed amount, my down payment.  Thus, our total willingness to pay is determined by how much you are willing to pay to enter this partnership.

Now we can see how my optimism plays a role.  Suppose I am more optimistic than you in the sense that I think there is a lower probability of default than you do.  It turns out this doesn’t make our willingness to pay any higher than it would be if I were a realist just like you.  That’s because you own the house in the event of default so it’s the probability that you assign to default that enters into our total value, not the probability that I assign.  It’s true that I assign a higher probability to the good event that the price goes up, but I am already putting all of my cash into the partnership.  I can’t do anything more to leverage this form of optimism.

But suppose instead that the way in which I am more optimistic than you is slightly different.  We both assign the same probability to default, i.e. the event that the price falls.  Where we differ is in terms of our beliefs conditional on the price going up.  In particular I think that conditional on the upside, the expected price increase is higher than you think it is.  Now we have a new way to leverage our partnership.  Since I expect to have a higher upside, I am prepared to offer you a higher payment in the event of that upside.  (That is, I am willing to pay back a larger loan amount.)  And the promise of that higher payment on the upside coupled with the same old house on the downside makes this a strictly more attractive partnership for you and you are willing to pay more to enter it.  (That is, you are willing to loan more to me.)

Indeed these collateralized loans seem to be the ideal contracts for us to make the most of our differences in beliefs.  And once we see how that works, it is easy to go from there to a theory of a dynamic housing bubble.  Tomorrow there might be optimistic investors who will partner with creditors to bid up housing prices.  Today, you and I might have differences in beliefs about the probability that those optimistic investors will materialize.  If I am more optimistic than you about it, you and I can enter into a partnership which leverages our different beliefs about tomorrow’s differences in beliefs, etc.

There is an important thing to keep in mind when considering models with heterogeneous beliefs.  We don’t have a good handle on welfare concepts in these models.  For example, in Simsek’s model the efficient allocation is to give the asset to the optimists.  Indeed, the financial friction is only an impediment to achieving an efficient allocation.  A planner, faced with the same constraint, would not do anything different than the market.  If we apply standard welfare notions like these, then these models are not a good framework for discussing financial reform.

  1. The applied physics of pizza tossing.
  2. Video of Glenn Gould playing the Goldberg Variations.  Starts at around 6:30 and goes on for six clips.  Really good.
  3. Charles Mingus’ cat toilet training program.
  4. The sexual battles of ducks.
  5. On behalf of the two spaces between sentences I would like to say I think they are beautiful.  And that it must be lonely to be one space.  And I know this is wrong.

Government organizations often compete rather than cooperate.  They compete for funding from the central government and if, say, the C.I.A. succeeds in some task and the N.C.T.C. does not, money, status, access etc. might move naturally towards the former from the latter.  If the N.C.T.C. helps the C.I.A. catch a terrorist, ironically, its own hard work is punished.  On the other hand, competition helps to give the bureaucracies the incentive to work hard.  That is the positive effect that must be counterbalanced against the negative effect on incentives to cooperate.  What is the optimal incentive scheme?

This seems like a pretty important question and someone has studied an important part of it.  The classic paper is Hideshi Itoh’s Incentives to help in Multi-Agent Situations.

Suppose the marginal cost of helping is zero at zero effort of helping.  Then, if one agent’s help reduces the other’s marginal cost of effort at his main task, it is optimal to incentivize teamwork.  How do you do that?  One agent has to be paid when the other succeeds.  The assumptions that efforts are complements and that the marginal cost of help is zero at zero do not seem to be a big stretch in the present circumstances.  The benefits of greater competition, lower resource costs, must be traded off against the costs, less cooperation and hence more chance of a successful terrorist attack if “dots are not connected” across organizations.

Itoh also shows that if the marginal cost of helping is positive at zero help, the optimal scheme either involves total specialization or, more surprisingly, substantial teamwork.  This is because giving agents the incentive to help each other just a little is very costly, given the cost condition.  So, if you are going to incentivize teamwork at all, it is optimal to incentivize large chunks of it.  If the benefits of catching terrorists are large, this logic also pushes the optimal scheme towards teamwork.

With much information classified, it is impossible to know how much intra-bureaucracy competition contributed to intelligence failure.  But whether it did or not, it is worth ensuring that good mechanisms for cooperation are in place.

Out now is a collection of academic essays on The Big Lebowski.

Where cult films go, academics will follow. New in bookstores, and already in its second printing, is “The Year’s Work in Lebowski Studies,” an essay collection edited by Edward P. Comentale and Aaron Jaffe (Indiana University Press, $24.95). The book is, like the Dude himself, a little rough around the edges. But it’s worth an end-of-the-year holiday pop-in. Ideally you’d read it with a White Russian — the Dude’s cocktail of choice — in hand.

Chullo chuck:  gappy3000.  And here is a Big Lebowski random quote generator.

Here’s a purely self-interested rationale for affirmative action in hiring.  An organization repeatedly considers candidates for employment.  A candidate is either good or just average and there are minority and non-minority candidates.  The quality of the candidate and his race are observable.  The current members decide collectively whether to make a job offer to the candidate.

What’s not observable is whether the applicant is biased against the other race.  A biased member prefers not to belong to an organization with members of the other race.  In particular, if hired, he will tend to vote against hiring them.

Unbiased non-minority members of such an organization will optimally hold minority applicants to a lower quality standard, at least initially.  The reason is simple.  An organization with no minority members will have its job offers accepted more often by biased non-minority candidates, who will then make it harder to hire high-quality minority candidates in the future.  Since bias is not observable, affirmative action is an alternative instrument to ensure that the organization is not hospitable to those who are biased.

The effect is weaker in the opposite direction.  Even if there are minority applicants who are biased in favor of minorities, their effect on the organization’s decision-making will be smaller because they are in the minority.  So at the margin there is a gain to practicing at least some affirmative action.

(This also explains why every economics department should have at least one structural and one reduced-form empirical economist.)

The New York Times has another great story about the credit/debit card market.  The main idea is that Visa pushed up merchant fees (e.g. its charge to the grocery store, gas station, etc. where you shop) and split the revenue with the issuing bank (e.g. the Bank of America Visa card).  Mastercard soon followed and prices charged to merchants increased because of competition:

“What we witnessed was truly a perverse form of competition,” said Ronald Congemi, the former chief executive of Star Systems, one of the regional PIN-based networks that has struggled to compete with Visa. “They competed on the basis of raising prices. What other industry do you know that gets away with that?”

If consumers saw lower prices when Mastercard undercut Visa on merchant fees, the traditional model of competition would apply.  But two things stand in the way: (1) prices are not allowed to differ between cash and cards, let alone between one network or bank and another, and (2) even then the merchant would have to pass the lower cost on to buyers.

The simplest way to remedy this would be to allow merchants to charge different prices for different methods of payment.  If your Mastercard gets you gas cheaper than your Visa does, you’ll use your Mastercard and Visa will not be able to raise fees so easily.  If paying by cash gets you a better price, this disciplines both Visa and Mastercard and the issuing banks.

Update:  Here is an interesting article by Josh Gans about this topic.  He suggests that allowing merchants to impose surcharges for credit card use would help cut merchant fees, and he has papers on the topic.  Allowing surcharges is similar to allowing different prices for different payment methods.  And I noticed that there is some lively dialogue at Marginal Revolution.

To those following me on Twitter, I am not losing my mind. (Or at least not any faster than always.)

  1. Don’t go near that tree, there’s a guy who looks just like Danny Bonaduce perched up there hurling pears at unsuspecting passersby. (Partridge in a pear tree.)
  2. 11th-hour negotiations avert war between the two great superpowers of the turtle world. (Two turtle doves, get it? 🙂 )
  3. Frottez les trois poules avec du romarin, puis faites-les revenir dans une poêle profonde avec de l’ail. (Three french hens)
  4. This is getting out of hand. Four times already this morning! How do I register for the avian do-not-call list? (Four calling birds.)
  5. five golden rings
  6. I was frozen with terror. But then I had a vision. Half a dozen geese. All my fears were put to rest. (Six geese allaying.)
  7. Someone threw my favorite Sufjan Stevens album into Lake Michigan. (See here.)
  8. Sir great news from the servants in the dairy. I know you’ve been worried about the cows, but today 8 made some milk, King Hexanoel. (Say out loud “8 made some milk King.”)
  9. Madame, we have shoes for your Christmas ball somewhere in these boxes. hmm…Men’s running? No. Ah here it is, Size 9 Ladies’ Dancing.
  10. Let’s get this party started, where are those lords I keep hearing about? What, sleeping?? Off with their heads! Wait, what? Oh never mind. (10 lords a leaping, not sleeping.)
  11. I feel like a sewer rat being pulled in 11 different directions. (11 pipers piping.)
  12. Hey you two elves, grab your sticks and give me a drum roll. This is the grand finale…#twelvetweetsofchristmas. (Two elf drummers drumming.)

Nolan Miller, a professor at the University of Illinois at Urbana-Champaign, and I wrote a prospective op-ed which we submitted to the New York Times.  It was written around the time Obama made his big speech about Afghanistan and the date he was suggesting for starting to draw down forces.  You’ll find it below.  After we submitted, there were some op-eds the Times itself published – they did not accept ours.  Check out this one after reading our attempt:

The President’s long-awaited Afghanistan policy has been revealed: a “surge” of 30,000 more troops with an “exit ramp” beginning in July 2011.  Leading Republicans praised the surge but condemned the preordained departure date, claiming that the Taliban will lie low and reemerge when we leave.

The Obama administration says that the Republicans are missing the bigger picture: the Afghan government needs to step up, and if we give them a “blank check”, they will never do their job. Obama says that the withdrawal date is “locked in” and that our hard deadline forces Karzai to build a security force rather than rely on us to spend our own precious resources and lives on his behalf.   And the 18-month surge gives us the breathing room to help Karzai man and train his army.

However, other administration officials, most notably Secretaries Clinton and Gates and General McChrystal, are singing a different, more nuanced, song.  They say that although July 2011 is the expected turning point in Afghanistan, when we can begin to leave without risking another backslide, this date is flexible.  The President’s strategy is to begin leaving only if conditions on the ground are favorable. According to McChrystal, “We will not decrease coalition forces without the increase of Afghan national security forces capability.”  In other words, we have initiated an open-ended surge, and we will stay there for at least eighteen more months. If Karzai hears this message, he will think withdrawal is not locked in, and he would be right to interpret it as, if not a blank check, then not a last chance either.

For an administration that is known for staying on message, this statement on a critical policy is remarkably muddied.  That is, of course, unless the muddied message is the message.

The Obama Afghanistan strategy needs to walk a fine line, playing to multiple constituencies both at home and abroad.  The domestic audiences are clear.  Abroad, Obama needs Karzai to believe that this is a make-or-break point for him, and that we won’t be there to back him up indefinitely.  Hence the need to be “locked in” to a firm withdrawal date.

The target of the other half of the mixed message is not the Afghan government, or even the Taliban, but the third and most difficult player facing the Americans: Pakistan.  Once the U.S. ramps up its efforts in Afghanistan, the Taliban will undoubtedly run to the mountains of Pakistan, where they will join their old friend Osama bin Laden.  So, we will have to rely on Pakistan to perform the second part of the pincer movement that cuts off our enemies.

Pakistan has always had a complex relationship with the Taliban and Al Qaeda.  Fanatics provide a ready supply of volunteers for attacks on India, and Pakistan is reluctant to turn on its former allies.  And if Karzai falls, their old allies the Taliban will be back in power and they can go back to living in relative harmony.  To persuade Pakistan to turn on its allies who are our enemies, they have to think we are in Afghanistan until the Taliban threat is eliminated. The quickest way to get us out of the region is to give us what we want, and what we want is a stable Afghanistan and the elimination of the Taliban and Al Qaeda.

So, Obama is trying to send one message to Karzai (“Our departure is locked in”) and another to Pakistan (“We are here as long as it takes, so help us or else”).  Muddling these two messages together is as confusing to them as it is to us.  Worse, there is a danger that each hears the wrong message.  If Karzai hears we are flexible and Pakistan hears that departure is locked in, neither will help us.  And the Taliban can run across the border and wait and see which of these two strategies Obama will actually employ in 2011.

At best, Obama’s strategy is to send two quite contradictory messages at the same time and have each side hear the message he wants them to hear. At worst, it is a compromise of different views within his administration.  In the confusion, the wrong message may get through to each side and American soldiers will have to pick up the slack left by a confusing policy.  Then, Obama will regret not choosing one clear transparent strategy where at least one side, either Karzai or Pakistan, would have been forced to step up.

Readers of this blog know that I view that as a very good thing.

Justin Rao from UCSD analyzes shot-making decisions by the Los Angeles Lakers over the course of 60 games in the 2007-2008 NBA season.  He collected data on the timing of the shot and identity of the shooter and then recorded additional data such as defensive pressure and shot location by watching the games on video.  The data were used to check some basic hypotheses of the decision theory and game theory of shot selection.

The team cooperatively solves an optimal stopping problem in deciding when to take a shot over the course of a 24 second possession.  At each moment a shot opportunity is realized and the decision is whether to take that shot or to wait for a possibly better opportunity to arise.  Over time the option value of waiting declines because the 24 second clock winds down and the horizon over which further opportunities can appear dwindles.  This means that the team becomes less selective over time.  As a consequence, we should see in the data that the success rate of shots declines on average later in the possession.  Justin verifies this in the data.
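The optimal stopping logic can be sketched in a few lines of code.  Everything here is my own simplification, not anything from Justin’s data: opportunities arrive once per tick with uniformly distributed success probabilities.  The point is just that backward induction produces a shooting cutoff that falls as the clock runs down, so realized success rates are mechanically lower late in the possession.

```python
import random

TICKS = 24  # one shot opportunity per second of the shot clock

def thresholds():
    # Backward induction: shoot iff this opportunity's success
    # probability beats the value of waiting for a better one.
    # With q ~ Uniform(0,1), E[max(q, c)] = (1 + c**2) / 2.
    cont = 0.0  # value of the possession once the clock expires
    cut = [0.0] * TICKS
    for t in reversed(range(TICKS)):
        cut[t] = cont  # cutoff: shoot iff q >= continuation value
        cont = (1 + cont ** 2) / 2
    return cut

def simulate(n=100_000, seed=0):
    # Returns (early, late): the average make rate of shots taken
    # in the first versus second half of the possession.
    rng = random.Random(seed)
    cut = thresholds()
    early, late = [], []
    for _ in range(n):
        for t in range(TICKS):
            q = rng.random()  # quality of this opportunity
            if q >= cut[t] or t == TICKS - 1:  # forced shot at the buzzer
                made = rng.random() < q
                (early if t < TICKS // 2 else late).append(made)
                break
    return sum(early) / len(early), sum(late) / len(late)
```

Because the cutoff falls monotonically, early shots clear a higher bar and go in more often; the declining success rate emerges purely from the stopping rule.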

Of course, the shot opportunities do not arise  exogenously but are the outcome of strategy by the offense and defense.   The defense will apply more pressure to better shooters and the offense will have their better shooters take more shots.  Both of these reduce the shooting percentage of the better shooters and raise the shooting percentage of the worse shooters.  (For example when the better shooter takes more shots he does so by trying to convert less and less promising opportunities.)

With optimal play by both sides, this trend continues until all shooters are equally productive.  That is, conditional on Kobe Bryant taking a shot at a certain moment, the expected number of points scored should be the same as in the alternative in which he passes to Vladimir Radmanovic, who then shoots.  To achieve this, Kobe Bryant shoots more frequently but has a lower average productivity.  Also, the defense covers Radmanovic more loosely in order to make it relatively more attractive to pass to him.  This is all verified in the data.
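The equalization of marginal (not average) productivity is the standard equal-marginal principle, and a toy allocation makes it concrete.  The player names and numbers below are invented for illustration, not estimates from the paper:

```python
# Toy abilities: points per shot decline in a player's shot count
# because each extra shot comes from a worse opportunity.
def marginal_points(player, shots_taken):
    base = {"star": 1.3, "role": 0.9}[player]  # hypothetical numbers
    return base - 0.02 * shots_taken

def allocate(total_shots):
    # Greedy allocation: hand each shot to whoever has the higher
    # marginal productivity right now.  With declining marginals this
    # maximizes the team's total expected points.
    taken = {"star": 0, "role": 0}
    for _ in range(total_shots):
        best = max(taken, key=lambda p: marginal_points(p, taken[p]))
        taken[best] += 1
    return taken
```

With 80 team shots, the star takes the bulk of them and both players end up with essentially the same marginal productivity (about 0.3 points per shot here), even though the star’s average productivity over the shots he takes remains higher.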

Finally, these features imply that a rising tide lifts all boats.  That is, when Kobe Bryant is on the court, in order for productivities to be equalized across all players it must be that all other players’ productivities are increased relative to when Kobe is on the bench.  He makes his teammates better.  This is also in the data.

The equal productivity rule applies only to players who actually shoot.  In rare cases it may be impossible to raise the productivity of the supporting cast to match the star’s.  In that case the optimum is a corner solution: the star should take all the shots and the defense should guard only him. On March 2, 1962 Wilt Chamberlain was so unstoppable that despite being defended by three and sometimes four defenders at once, he scored 100 points, the NBA record.

  1. The top 20 internet lists of 2009. It starts off with a bang at #20:  5 cats that look like Wilford Brimley.
  2. A million lists of top-three books of 2009. I was not asked, but if I were asked I would have proven how literate and practical I am by listing Bolaño’s 2666 even though I didn’t read it (and it was published in 2004.)
  3. Year-end list of lists about jazz in 2009. Coming in at #1 on the list of worst mustaches in the Bill McHenry Quintet… Bill McHenry!
  4. The Noughtie List.  From Kottke.org. It’s got almost everything covered.  One omission: I did not find the list of things not listed on that list.

My sketch of the snowball fight reminded Eddie Dekel of a popular children’s game.  After he described it to me, I recognized it as a game I have seen my own kids play.  It works like this.  Two kids face off.  At each turn they simultaneously choose one of three actions: load, shoot, defend. (They do this by signaling with their arms: cock your wrist to load, make a gun with your fist to shoot, cross your arms across your chest to defend.  They first clap twice to synchronize their choices, just like in rock-paper-scissors.)

If you shoot when the other is loading you win.  You cannot shoot unless you have previously loaded.  If you shoot unsuccessfully (because the opponent either defended or also shot) your gun is empty and you must reload again.  (Your gun holds only one bullet.  But Eddie mentioned a variant in which guns have some larger, but still finite, capacity.)

The game goes on until someone wins.  In practice it usually ends pretty quickly.  But what about in theory?

First a little background theory.  This is a symmetric, constant-sum, complete information multi-stage game.  If we assign a value of 1 to winning and 0 to losing, the symmetric constant-sum nature means that each player can guarantee an expected payoff of 1/2.  In that respect the game is similar to rock-scissors-paper.  Indeed the game appears to be a sort-of dynamic extension of RSP.

But, despite appearances, it is actually much less interesting than RSP.  In RSP, the ex ante symmetry (each player expects a payoff of 1/2) is typically broken ex post (often one player wins and the other loses, although sometimes it is a draw.)  By contrast, with best play LSD (load, shoot, defend; silly, I know, but I actually don’t know if it has an official name) is never decisive and in fact it never ends.

Here’s why.  The game has four “states” corresponding to how many bullets (zero or one) the two players currently have in their guns.  Obviously the game cannot end when the state is (0,0) and since playing load is either forbidden (depending on the local rules) or dominated when the state is (1,1), the game cannot end there either.

So it remains to figure out what best play prescribes when the game is imbalanced, either state (1,0) or (0,1).  The key observation is that just as at the beginning of the game, where symmetry implied that each player had an expected payoff of 1/2, it is still true at this state of the game that even the weaker player can guarantee an expected payoff of 1/2. Simply defend.  Forever if need be.  There is no reason to think that this is an optimal strategy but still it’s one strategy at your disposal, so you certainly can’t do worse than that.

The surprising thing is that best play requires it.  To see why, suppose that the weaker player chooses load with positive probability.  Then the opponent can play shoot with probability 1 and the outcome is either (shoot, load) [settle down Beavis] in which case the opponent wins, or (shoot, defend) in which case the game transits to state (0,0).  Since the value of the first possibility is 1 and the value of the second is 1/2 (just as at the start of the game), this gives an expected payoff to the opponent larger than 1/2.  But since payoffs add up to 1, that gives the weaker player an expected payoff less than 1/2, which he would never allow.

So the weaker player must defend with probability 1 and this means that the game will never end.  Pretty boring for the players, but rather amusing for the spectators.
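The argument can be checked by simulation.  Here is a minimal sketch (my own encoding of the one-bullet rules) in which both players follow the strategies pinned down above: load only when both guns are empty, defend whenever outgunned, and otherwise mix between shoot and defend.  No game ever ends.

```python
import random

def move(my_bullets, opp_bullets, rng):
    # Best play as derived above:
    #  state (0,0): load -- the opponent cannot shoot you
    #  outgunned (0,1): defend with probability 1, forever if need be
    #  armed (1,0) or (1,1): loading is forbidden/dominated, so mix
    if my_bullets == 0 and opp_bullets == 0:
        return "load"
    if my_bullets == 0:
        return "defend"
    return rng.choice(["shoot", "defend"])

def play(rounds=10_000, seed=0):
    rng = random.Random(seed)
    a = b = 0  # bullets in each one-shot gun
    for _ in range(rounds):
        ma = move(a, b, rng)
        mb = move(b, a, rng)
        if ma == "shoot" and mb == "load":
            return "A wins"
        if mb == "shoot" and ma == "load":
            return "B wins"
        if ma == "load": a = 1
        elif ma == "shoot": a = 0  # missed: opponent defended or also shot
        if mb == "load": b = 1
        elif mb == "shoot": b = 0
    return "no winner"
```

The decisive pairing (shoot, load) can never occur: loading happens only in state (0,0), where nobody has a bullet to shoot.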

We can try to liven it up a bit for all involved.  The problem is that it’s not really like RSP or its 2-action cousin Matching Pennies, which work the way they do because of the cyclical relation of their strategies.  (Rock beats Scissors, which beats Paper, which beats Rock…)  We can add an element of that by removing the catch-all defend action and replacing it with two actions, defend-left and defend-right (say the child leans to either side.)  And then instead of plain-old shoot, we have shoot-left and shoot-right. Your shot misses only when you shoot to the opposite side from the one he leans to.  There are a number of ways to rule on what happens when I shoot-left and he loads, but I would guess that anything sensible would produce a game that is more interesting than LSD.

The safeguards that are employed in airport security policy are found using the “best response dynamic”: Each player chooses  the optimal response to their opponent’s strategy from the last period.  So, the T.S.A. best-responds to the shoe bomber Richard Reid and a terrorist plot to blow up planes with liquid explosives.  We end up taking our shoes off and having tiny tubes of toothpaste in Ziplock bags.  So, a terrorist best-responds by having a small device divided into constituent parts and hidden in his underwear.  One part has to be injected into another via a syringe and the complications that ensue prevent the successful detonation of the bomb.  In this sense, each player is best-responding to the other and the airport security policy, by making it a bit harder to carry on a complete bomb, succeeded with a huge dose of good luck thrown in.

What should we learn from the newest attempt to blow up an airplane?

First and most obviously, the best way to minimize the impact of terrorism is to stop terrorists before they can even get close to us.  This appears to be the main failure of security policy in the recent incident – more focus on intelligence and filtering of watch lists is vital.  Second, the best response dynamic should not be the only way to inform policy.  There are already rumors that no one will be allowed to walk around for the last hour of the flight or have personal items on their lap.  Terrorists will respond to these policies by blowing up planes earlier in flight.  Does that make anyone feel any safer or the terrorists less successful?  The main problem is that terrorists are thinking up new schemes to get to nuclear power stations, kidnap Americans abroad, and do other horrible things that should be brainstormed and pre-empted.  The best response dynamic is backward looking and cannot forecast these problems or their solutions.  This second point is also obvious.  The fact that a boy whose father turned him in got on a plane with a bomb suggests that even obvious points are worth making.
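A toy model makes the backward-looking point concrete.  This is purely illustrative (the target names and the pure best-response rules are my own stylization): the screener best-responds to the last attack, the attacker best-responds to the current screen, and the screener is always exactly one step behind.

```python
TARGETS = ["shoes", "liquids", "underwear"]  # stylized attack methods

def defender_br(last_attack):
    # Best response to history: screen for the method used last time.
    return last_attack

def attacker_br(current_screen):
    # Best response to the policy: use any method not screened for.
    return next(t for t in TARGETS if t != current_screen)

def run(periods=9, first_attack="shoes"):
    history, attack = [], first_attack
    for _ in range(periods):
        screen = defender_br(attack)
        attack = attacker_br(screen)
        history.append((screen, attack))
    return history
```

In every period the screen and the attack differ, so the dynamic cycles forever and the defender never anticipates; forward-looking intelligence has to do that work.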

Telephone is a kids’ game where the children sit in a circle and pass around a message by whispering in the neighbor’s ear.  The game is to see how garbled the message gets by the time it makes it all the way around the circle.  My 21st century version of telephone is played solitaire, using Google Translate.

  1. No, the only option at this point is to tackle her, landing you both deep in the snow
  2. (English -> French) Non, la seule option possible à ce point est à sa face, vous êtes tous deux d’atterrissage dans la neige
  3. (French -> Italian) No, l’unica opzione a questo punto è sul suo volto, si sono entrambi di atterraggio in mezzo alla neve
  4. (Italian -> Spanish) No, la única opción en este momento está en su cara, han de aterrizaje en la nieve
  5. (Spanish -> Chinese) 沒有,唯一的選擇在這個時候是你的臉,已經降落在雪地裡
  6. (Chinese -> Estonian) Ei, ainus võimalus sel ajal on oma nägu, on lossitud lumi
  7. (Estonian -> Afrikaans) Nee, die enigste opsie is op die oomblik is op jou gesig, dit geland in die sneeu
  8. (Afrikaans -> Turkish) Hayır, şu anda tek seçenek in your face, bu kar indi
  9. (Turkish -> English) No, currently the only option in your face, the snow fell

Hilarity ensues!  (As usual, one of the kids seems to be sabotaging the game.  Here we can see that it is the child translating from Afrikaans to Turkish.)

If this is all obvious, forgive me; I came late to this (I grew up in Orange County, CA, where it last snowed in December of Yeah Right.)

The first thing to do, obviously, is to make a snowball.  Your enemy combatant will do the same.  You each now have one snowball in your stockpile.  What next?

If you throw your snowball you will be unarmed and certain to pay the consequences.  So you don’t.  Neither does she.  You are at a standoff, but very soon you figure out what to do while you wait for the standoff to resolve.  Make another snowball.  Of course she does the same.

Now you each have an arsenal of two snowballs.  Two is very different from one however because if you throw your snowball you still have one to defend yourself with.  But you will have one fewer than she.  This still puts her at an advantage because once you use your last snowball you are again unarmed.  So you will only throw your first snowball if you have a reasonable chance of landing it.

The alternative is to make another snowball.  Which of these is the better option depends on what she is expecting.  If she knows you will throw, she is prepared to dodge it and then press her advantage.  If she knows you will make another one she will wait for you to reach down into the snow when you are most vulnerable and she will draw first blood.

So you have to randomize.  So does she.  There are two possible outcomes of these independent randomizations.  First, one or two snowballs may fly, resulting in a sequence of volleys which eventually depletes your stocks down to one or two snowballs.  The second possibility is that both of you increase your stockpile by one snowball.

Thus, equilibrium of a well-played snowball fight gives rise to the following stochastic process.  At each stage, with a certain positive probability, the stockpiles both increase by one snowball.  This continues without bound until, with the complementary probability in each stage, a fight breaks out depleting both stockpiles and beginning the process again from zero.
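That stochastic process is easy to simulate.  A sketch, with an invented growth probability (the text only requires that it be positive) and fights that deplete stockpiles all the way to zero:

```python
import random

def snowball_process(stages=100_000, p_grow=0.7, seed=0):
    # Each stage: with probability p_grow both arsenals grow by one
    # snowball; otherwise a fight breaks out and depletes them.
    rng = random.Random(seed)
    stock, fight_sizes = 0, []
    for _ in range(stages):
        if rng.random() < p_grow:
            stock += 1
        else:
            fight_sizes.append(stock)  # arsenal size when the fight starts
            stock = 0
    return fight_sizes
```

The stockpile at the moment a fight breaks out is geometrically distributed with mean p_grow / (1 - p_grow): most fights start small, but arbitrarily large arms races occur with positive probability.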

Special mention should be made of a third strategy which is to be considered only in special circumstances.  Rather than standing and throwing, you can charge at her and take a shot from close range.  This has the obvious advantages but clearly leaves you defenseless ex post.  Running away should be ruled out because you will be giving up your entire store of snowballs and eventually you will have to come back.  No, the only option at this point is to tackle her, landing you both deep in the snow.  With the right adversary, this mutually assured destruction could be the best possible outcome.

It’s trendy to get your economist on around the holidays and complain about the inefficiency of gift exchange.  Giving money is a more efficient way to make the recipient better off.  But that’s a fallacy that only trips up poser-economists.  To a real economist, that’s like observing that eating an omelette is an inefficient way to get all of the nutrients we need in our breakfast.  Yeah, so?  That’s not why I ate it.

A real economist recognizes unregulated, voluntary exchange when he sees it.  He doesn’t bother inventing some hypothetical motivation for the exchange because he understands revealed preference. If they are doing it voluntarily then it is efficient, regardless of what they think they are getting out of it.  Indeed, the pure consumption value of buying a plaid sweater for somebody is a perfectly good motivation.  And since the recipient voluntarily accepts the gift, even better.  If there was a Pareto superior alternative they would have done that instead.

So this holiday, swat that poser economist in red off your left shoulder, hold hands with the real economist in white on your right shoulder and give to your hearts’ content.  (Oh and I am very easy to shop for.  Just don’t forget to include a gift receipt!)

Or dead salmon?

By the end of the experiment, neuroscientist Craig Bennett and his colleagues at Dartmouth College could clearly discern in the scan of the salmon’s brain a beautiful, red-hot area of activity that lit up during emotional scenes.

An Atlantic salmon that responded to human emotions would have been an astounding discovery, guaranteeing publication in a top-tier journal and a life of scientific glory for the researchers. Except for one thing. The fish was dead.

Read here for a lengthy survey of the pitfalls of fMRI analysis. Via Mindhacks.