I do most of my reading through Google Reader, and when I get an idea for the blog I post it to Google Buzz. The small number of followers I have there will sometimes comment on and/or vote for the ideas that pop up. When it’s time to write something for the blog, I go back there for ideas.

Buzz will soon be retired by Google and Reader is going to be crippled. That’s going to affect me. For one thing, I won’t have access to Google Reader feeds from many of my favorite curators, most notably Courtney Conklin Knapp, the soy sauce of the internet. And I need a new place to pre-test my ideas. (I used to think that I would use Twitter for that, but my Twitter identity has become overrun with tweets like “I went back in time so I could be the first person to write about the paradoxes of time travel.”)

Google says that the retired services are to be replaced by Google+ so I am switching to Google+. In fact I have been using it for a while now, jotting down some ideas and getting feedback before posting them here. Google+ is more of a “social” social network than Buzz so instead of just dumping links and incomprehensible notes-to-self, I am writing little rough drafts. It works well. Writing doesn’t come easily to me, but for some reason when I know that what I am writing is verifiably a rough draft, I loosen up a bit and it comes easier. The feedback is great too.

(One potential downside is that I write something wrong, people on G+ point out that it’s wrong, and I am too embarrassed to post it here. You learn a lot from wrong ideas so that would be a loss.)

So if you are on G+ I hope to see you there, and I would be happy to get your feedback.

By the way, know any good blogs? It seems like a large number of the ones I subscribe to on Reader have gone dark, so I am looking for some new ones. “What’s Hot” in Google Reader is already gone!

Social choice theorists are going to see an experiment in action as San Francisco votes for a new mayor using rank-order voting.  Here is how it works:

Each voter lists up to three candidates in ranked order: First, second and third choice.
If one candidate gets more than 50 percent of the first-place votes in the first round of counting, he’s the winner and there’s no need to look at the second and third choices.
But if no one has a majority, the candidate with the fewest number of votes is eliminated from the future count and his second-choice votes are distributed to the remaining candidates.
If still no one cracks the 50 percent mark, then the candidate with the second-lowest vote total is eliminated and his second-place votes are distributed. If the voters’ second choice already was eliminated, it’s the third-choice vote that goes back into the pool.
This continues until one candidate has a majority of the remaining votes. Last November, it took 20 rounds before Malia Cohen finally was elected as supervisor from San Francisco’s District 10.
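
For concreteness, here is a minimal sketch of that counting procedure (my own illustration, not San Francisco’s official tabulation rules; ties and exhausted ballots are handled crudely):

```python
from collections import Counter

def instant_runoff(ballots):
    """Each ballot lists up to three candidates in order of preference.
    Repeatedly eliminate the candidate with the fewest first-choice votes
    until someone holds a majority of the remaining (non-exhausted) ballots."""
    while True:
        tallies = Counter(b[0] for b in ballots if b)   # top surviving choice on each live ballot
        live = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if 2 * votes > live or len(tallies) == 1:
            return leader
        loser = min(tallies, key=tallies.get)
        ballots = [[c for c in b if c != loser] for b in ballots]   # strike the eliminated candidate

ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(instant_runoff(ballots))   # "C" wins: no first-round majority, "B" is eliminated, its ballot transfers to "C"
```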

There are two strategic issues.  First, by Gibbard-Satterthwaite/Arrow there must be an incentive for strategic voting, so sincere voting and strategic voting will differ.  Second, the candidates’ positions, and indeed the question of who enters as a candidate at all, are a key part of the rationale for switching to rank-order voting in the first place.  Some voters must hope that third party candidates can now enter and have a chance of winning.  Others must hope that more centrist policies are adopted by the candidates in the hope of being voters’ second or third choice.

I assume there are many formal theory papers in political science on this but am not familiar with them…anyone have any ideas?

Advertisers want information about your tastes and habits so they can decide how much they are willing to pay to advertise at you.  That information is stored by your web browser on your hard drive.  Did you know that every time you access a web page you freely hand over that information to a broker who immediately sells it to advertisers who then immediately use it to bid for access to your eyeballs?

Here’s how it works.  Internet vendors, say Amazon, pass information to you about your transactions and your browser stores it in the form of cookies.  Later on, advertisers are alerted when you are accessing a web page and they compete in an auction for the ad space on that page.  At that moment, unless you have disabled the passing of cookies, your browser is sending to potential advertisers all of the cookies stored on your hard drive that might contain relevant information about you.

However, many of the really valuable cookies are encrypted by the web site that put them there.  For example, if Amazon encrypts its cookies then even though your browser gives them away for free, they are of no use to advertisers.

That is, unless the advertisers purchase the key with which to decrypt your cookies. And indeed Amazon will make money from your data by selling its keys to advertisers.  It could sell them directly but it will probably prefer to sell them through an exchange where advertisers come to buy cookies by the jar.

The interesting thing about the market for cookies is that you are the owner of the asset and yet all of the returns are going to somebody else.  And it’s not because your asset and mine are perfect substitutes.  You are the monopolistic provider of information about you, and when you arrive at a website it is you the advertisers are bidding for.

How long will it be before you get a seat at the exchange?  Nothing stops you from putting a second layer of encryption over Amazon’s cookies and demanding that advertisers pay you for the key.  Nothing stops me from paying you for exclusive ownership of your keys, cornering the market-in-you, and recouping the monopoly profit.
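
A toy illustration of that second layer, using the Python cryptography package (purely hypothetical: no real browser gives you this hook, and the cookie value below is a made-up placeholder):

```python
from cryptography.fernet import Fernet

my_key = Fernet.generate_key()      # the key you would sell on the exchange
wrapper = Fernet(my_key)

amazon_cookie = b"gAAAAB...already-encrypted-by-amazon"   # placeholder for Amazon's encrypted cookie
double_locked = wrapper.encrypt(amazon_cookie)            # your second layer of encryption

# An advertiser who has bought Amazon's key still needs yours to read the cookie:
assert wrapper.decrypt(double_locked) == amazon_cookie
```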

File under “Feel Free To Profit From This Idea.”

(I learned about the market for cookies from Susan Athey’s new paper and a post-seminar dinner conversation with her and Peter Klibanoff, Ricky Vohra, and Kane Sweeney.)

(Picture:  Scale Up Machine Fail from http://www.f1me.net)

  1. How to do a credible Italian accent.
  2. “What I’d ask is that you not copyedit this like a Freshman essay.”
  3. Duet with Siri.
  4. See if you can watch the first 2 minutes of Straight No Chaser and not watch the entire 90 minute movie.
  5. Dudley Moore’s Beethoven parody.
  6. Woody Allen is funnier than Ethan Coen.

Via BoingBoing, why is the Indiana Election Commission putting cubes of styrofoam in their mailers?

The Styrofoam cube enclosed in this envelope is being included by the sender to meet a United States Postal Service regulation. This regulation requires a first class letter or flat using the Delivery or Signature Confirmation service to become a parcel and that it “is in a box or, if not in a box, is more than 3/4 of an inch thick at its thickest point.” The cube has no other purpose and may be disposed of upon opening this correspondence.

The price of a 6.5 oz bottle of Coca-Cola stayed fixed at 5¢ from 1886 to 1959.  Daniel Levy and Andrew Young document this fact and ask why this might have been the case in a period that saw two world wars and the Great Depression.  They offer two technological explanations for price rigidity:

First, we demonstrate that an installed base of vending machines with nickel-only capability, and the evolution of the technology that could accommodate multiple type coins and change making, imposed an important constraint on the ability of the Coca-Cola Company to adjust the Coke’s price. Second, at the 5¢ price per serving, the smallest price increase compatible with the consumer still using a single coin was a 100 percent jump to 10¢. A monetary transaction technology for smaller price adjustment while keeping consumer “inconvenience costs” low in terms of the number of coins needed for purchasing a bottle of Coca-Cola, was not available.

How can you get around these technological constraints?

You could lobby your friends:

Woodruff [Coca Cola CEO] submitted a request in 1953 to the newly elected President Dwight Eisenhower (his hunting companion and friend) himself, to get the U.S. Department of Treasury mint a new 7 1/2-cent coin. Eisenhower forwarded the request to the Treasury Department officials who did not like the idea.

Or ingeniously in an early example of mechanism design, you could randomize bottle delivery while retaining the nickel technology in the vending machine:

“Instead of offering one ‘Coke’ for 6¢ the coin cooler offers 8 ‘Cokes’ for 45¢, which is only 5.625¢ (5 5/8¢) per bottle. [The] coin cooler [delivers] either an empty bottle or no bottle at all for one nickel in every nine deposited. This absence of ‘Coke’ is called an official blank. Please be warned that, if you fail to deposit nine nickels, at worst you will strike the blank and have to deposit another nickel for your ‘Coke.’ At best you will miss the blank (8 times out of 9) and your ‘Coke’ will cost only a nickel, but as stated, on the average ‘Coke’ sells for 5.625¢ per bottle—the only price at which it is offered”

The plan might actually have been tried out in Chicago and Canada!
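
Just to spell out the arithmetic in the quoted scheme (nothing here beyond the numbers in the excerpt):

```python
# Nine nickels buy eight Cokes, since one nickel in every nine hits the "official blank".
nickels_per_cycle, cokes_per_cycle = 9, 8
print(5 * nickels_per_cycle / cokes_per_cycle)   # 5.625 cents per bottle, as advertised
print(1 / nickels_per_cycle)                     # ~0.111: the chance a given nickel strikes the blank
```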

(Hat Tip: Tilman Klumpp and Xuejuan Su)

Assume that people like to have access to a community of people with similar habits, tastes, demographics, etc.  A “community” is just a group of some minimal absolute size.  Then the denser the population the more likely you will find enough people to form such a community.

But this effect is larger for people whose tastes, habits, and demographics are more idiosyncratic than for people in the majority.  Garden-variety people will find a community of garden-variety people just about anywhere they go.  By contrast, if types of people are randomly distributed across locations, the density of cities makes it more likely that a community can be assembled there.

But that means that types won’t be just randomly distributed across locations. The unique types are willing to pay more to live in cities than the garden-variety types.
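
One hedged way to put numbers on this (my formalization, not anything in the post): if a fraction p of the population shares your type and a “community” needs at least m members, the chance that a city of N residents contains one is a binomial tail probability, which rises steeply with N for rare types and is already near one everywhere for common types.

```python
from scipy.stats import binom

def community_prob(N, p, m):
    """P(at least m residents of a type with population share p in a city of N people)."""
    return binom.sf(m - 1, N, p)

for N in (10_000, 100_000, 1_000_000):
    print(N,
          round(community_prob(N, p=1e-4, m=50), 3),   # idiosyncratic type: ~0, ~0, then ~1
          round(community_prob(N, p=0.2, m=50), 3))    # garden-variety type: ~1 everywhere
```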

From the great blog Mind Hacks:

Because of this, the new study looked at volleyball, where the players are separated by a net and play from different sides of the court. Additionally, players rotate position after every rally, meaning it’s more difficult to ‘clamp down’ on players from the opposing team if they seem to be doing well.

The research first established the belief in the ‘hot hand’ was common in volleyball players, coaches and fans, and then looked to see if scoring patterns support it – to see if scoring a point made a player more likely to score another.

It turns out that over half the players in Germany’s first-division volleyball league show the ‘hot hand’ effect – streaks of inspiration were common and points were not scored in an independent ‘coin toss’ manner.
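
The basic check behind a claim like that is simple to state (a toy estimator of my own, not the study’s method): compare how often a player scores right after a success with how often he scores right after a failure.

```python
def hot_hand_gap(outcomes):
    """Given a single player's 0/1 sequence of attempts, return
    P(score | previous attempt scored) - P(score | previous attempt missed).
    An i.i.d. 'coin toss' scorer gives roughly zero; a hot hand gives a positive gap."""
    after_hit = [b for a, b in zip(outcomes, outcomes[1:]) if a == 1]
    after_miss = [b for a, b in zip(outcomes, outcomes[1:]) if a == 0]
    return sum(after_hit) / len(after_hit) - sum(after_miss) / len(after_miss)

print(hot_hand_gap([1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0]))   # positive for this streaky toy sequence
```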

Quoting The Angry Professor:

I need a gmail filter that sends a custom vacation message to certain people. The filter needs to scan the message for the latest date mentioned and determine the geographical location of the sender. It must next add four weeks to that latest date. The longitude of the sender’s geographic location must be advanced by 180° and the latitude multiplied by -1. After triangulating the dry land nearest to the rotated coordinates, the filter must finally send the following “vacation” message:

Thank you for your email. I am [on dry land nearest the point exactly half-way around the globe from you]. I will not have email contact until [computed date], but I will try to respond to your message as soon as I return.
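
For fun, the two mechanical steps look like this (an illustrative toy; vacation_params is my own made-up helper and the nearest-dry-land lookup is left as an exercise):

```python
from datetime import date, timedelta

def vacation_params(latest_date, sender_lat, sender_lon):
    """Push the reply date out four weeks and rotate the sender's coordinates
    half-way around the globe (longitude advanced 180 degrees, latitude negated)."""
    reply_date = latest_date + timedelta(weeks=4)
    antipode_lat = -sender_lat
    antipode_lon = (sender_lon + 180 + 180) % 360 - 180
    return reply_date, antipode_lat, antipode_lon

print(vacation_params(date(2011, 10, 3), 41.9, -87.6))   # Chicago maps to roughly the southern Indian Ocean
```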

Via Vinnie Bergl, here is a post which examines pitch sequences in Major League Baseball, looking for serial correlation in the pitch type, i.e. fastball, changeup, curve, etc.  The motivating puzzle is the typical baseball lore that, e.g., the changeup “sets up” the fastball.  If that were true then the batter would know he is going to face a fastball next, and this would reduce the pitcher’s advantage.  If the pitcher benefits from being unpredictable then there should be no serial correlation.  The linked post gives a cursory look at the data which shows in fact the opposite of the conventional lore:  changeups are followed by changeups.

There is a problem however with the simple analysis which groups together all pitch sequences from all pitchers.  Not every pitcher throws a changeup.  Conditional on the first pitch being a changeup, the probability increases that the next pitch will be a changeup simply because we learn from the first pitch that we are looking at a pitcher who has a changeup in his arsenal.  To correct for this the analysis would have to be carried out at the individual level.
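
A quick way to see that pooling artifact (a toy simulation of my own, not the linked analysis): let every pitcher choose pitches i.i.d., but give them different changeup rates. The pooled data still show the probability of a changeup jumping after a changeup.

```python
import random

random.seed(0)
pitchers = [0.4] * 50 + [0.0] * 50   # i.i.d. pitchers: half throw changeups 40% of the time, half never do
pairs = []
for rate in pitchers:
    seq = ["CH" if random.random() < rate else "FB" for _ in range(200)]
    pairs += list(zip(seq, seq[1:]))

p_ch = sum(1 for a, _ in pairs if a == "CH") / len(pairs)
p_ch_after_ch = (sum(1 for a, b in pairs if a == "CH" and b == "CH")
                 / sum(1 for a, _ in pairs if a == "CH"))
print(round(p_ch, 3), round(p_ch_after_ch, 3))   # roughly 0.20 unconditionally vs 0.40 after a changeup
```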

Should we expect serial independence?  If the game were perfectly stationary, yes.  But suppose that after throwing the first curveball the pitcher gets a better feel for the pitch and is temporarily better at throwing a curveball.  If pitches were serially independent, then the batter would not update his beliefs about the next pitch, so the curveball would have just as much surprise but now slightly more raw effectiveness.  That would mean that the pitcher would certainly throw a curveball again.

That’s a contradiction so there cannot be serial independence.  To find the new equilibrium we need to remember that as long as the pitcher is randomizing his pitch sequence, he must be indifferent among all pitches he throws with positive probability.  So something must offset the temporary advantage of the curveball, and that is achieved by the batter looking for a curveball.  That can only happen in equilibrium if the pitcher is indeed more likely to throw a curveball.

Thus, positive serial correlation is to be expected.  Now this ignores the batter’s temporary advantage in spotting the curveball.  It may be that the surprise power of a breaking pitch is reduced when the batter gets an earlier read on the rotation.  After seeing the first curveball he may know what to look for next and this may in fact make a subsequent curveball less effective, ceteris paribus.  This model would then imply negative serial correlation:  other pitches are temporarily more effective than the curveball so the batter should be expecting something else.

That would bring us back to the conventional account.  But note that the route to “setting up the fastball” was not that it makes the fastball more effective in absolute terms, but that it makes it more effective in relative terms because the curveball has become temporarily less effective.

The latter hypothesis could be tested by the following comparison.  Look at curveballs that end the at-bat but not the inning.  The next batter will not have had the advantage of seeing the curveball up close, but the pitcher still has the advantage of having thrown one.  We should see positive serial correlation here, that is, the first pitch to the new batter should be more likely (than average) to be a curveball.  If in the data we see negative correlation overall but positive correlation in this scenario then it is evidence of the batter-experience effect.

(Update:  the Fangraphs blog has re-done the analysis at the individual level and it looks like the positive correlation survives.  One might still worry about batter-specific fixed effects.  Maybe certain batters are more vulnerable to the junk pitches and so the first junk pitch signals that we are looking at a confrontation with such a batter.)

Bruce Riedel, who ran President Obama’s AfPak review, now favors containment over engagement with Pakistan, as I mentioned in Part 1.  He thinks military aid should be reduced sharply and replaced with reduced tariffs and the like.  How is the Pakistani Army going to respond?  In the past, the scientist A. Q. Khan sold nuclear secrets to countries hostile to the United States.  Was the Pakistani Army complicit in the nuclear trade?  That information is not in the public domain.

What is to stop the Pakistani Army from reverting to nuclear trade under containment?  Again, thinking of the United States as the principal and the Pakistani Army as the agent, the principal has to come up with some way to give incentives to the agent.  With commitment, the obvious instrument is a threat, e.g. the United States will attack or invade Pakistan if it observes a sign of nuclear trade.  If the threat is credible, the Pakistani Army will not engage in nuclear trade and the United States will not have to carry out its threat.  But the threatened action is costly and it is impossible to commit to carrying out the threat in the very scenario where it is meant to be implemented.  Hence, there would be a credibility problem even if Pakistan were not nuclear.  The fact that they are nuclear makes the threat even less credible.  Indeed, it is hard to see any “stick” that can credibly be used to give incentives.

That leaves “carrots”.  The United States can make a transfer to the Pakistani Army if and only if there is no sign they are engaging in nuclear trade.  It is credible to take the carrot away if there is a sign of trade.  If there is no sign of nuclear trade, it is credible to make the transfer because the Pakistani Army has a credible threat of starting nuclear trade if the transfer is not forthcoming – after all, they get surplus from nuclear trade.  If the transfer is large enough, and signals of nuclear trade are more likely when nuclear trade actually occurs, the Pakistani Army will play its part in the equilibrium too.
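
A minimal one-shot version of that carrot logic, in my own notation (nothing this formal appears in Riedel’s piece): let B be the army’s surplus from nuclear trade, q the probability that trade produces a visible sign, and T the transfer paid only when no sign appears (no false positives, for simplicity). Refraining from trade is then optimal whenever

```latex
T \;\ge\; B + (1 - q)\,T
\qquad\Longleftrightarrow\qquad
T \;\ge\; \frac{B}{q}.
```

So the noisier the monitoring (the smaller q), the bigger the carrot has to be, which is the surplus-to-the-agent feature picked up in the next paragraph.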

This logic is familiar from the efficiency wage model of Shapiro and Stiglitz – if the principal cannot use sticks, she must use carrots and ends up giving surplus to the agent.  But here I only want to point out an irony: under the old regime of engagement, the United States paid the Pakistani Army to exert costly effort and hunt down terrorists; in the (possible) new regime of containment, the United States pays the Pakistani Army not to take the profitable action of engaging in nuclear trade.  Either way, a transfer is made.

For three weeks in Italy:

The “Vivace” (vivacious) is a burger topped with bacon, salted spinach, marinated onions and mayonnaise with mustard seeds.

The “Adagio” (slowly, like the musical term) is also a hamburger, topped with sweet-and-sour eggplant strips, sliced tomatoes and salted ricotta, all between a bun covered in sliced almonds.

There’s also a tiramisu.

via Arthur Robson:

While appeals often unmask shaky evidence, this was different. This time, a mathematical formula was thrown out of court. The footwear expert made what the judge believed were poor calculations about the likelihood of the match, compounded by a bad explanation of how he reached his opinion. The conviction was quashed.

And the judge ruled that Bayes’ law for conditional probabilities could not be used in court.  Statisticians, mathematicians, and prosecutors are worried that justice will suffer as a result.  The statistical evidence centered on the likelihood of a coincidental match between the shoeprint and shoes owned by the defendant.

In the shoeprint murder case, for example, it meant figuring out the chance that the print at the crime scene came from the same pair of Nike trainers as those found at the suspect’s house, given how common those kinds of shoes are, the size of the shoe, how the sole had been worn down and any damage to it. Between 1996 and 2006, for example, Nike distributed 786,000 pairs of trainers. This might suggest a match doesn’t mean very much. But if you take into account that there are 1,200 different sole patterns of Nike trainers and around 42 million pairs of sports shoes sold every year, a matching pair becomes more significant.

Now if I can prove to jurors that there was one shoe in the basement and another shoe upstairs, then probably I can legitimately claim to have proven that the total number of shoes is two, because the laws of arithmetic should be binding on the jurors’ deductions.  And if there is a chance that a juror comes to some different conclusion then it would make sense for an expert witness, or even the judge, to tell the juror that he is making a mistake.  Indeed a courtroom demonstration could prove the juror wrong.

But do the “laws” of probability have the same status?  If I can prove to the juror that his prior should attach probability p to A and probability q to [A and B], and if the evidence proves that A is true,  should he then be required to attach probability q/p to B?  Suppose for example that a juror disagreed with this conclusion. Could he be proven wrong?  A courtroom demonstration could show something about relative frequencies, but the juror could dispute that these have anything to do with probabilities.
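
In symbols, the step the juror is being asked to take is just the definition of conditional probability applied to the prior described above:

```latex
P(B \mid A) \;=\; \frac{P(A \cap B)}{P(A)} \;=\; \frac{q}{p}.
```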

It appears, though, that the judge’s ruling in this case was not based on Bayesian/frequentist philosophy, but rather on the validity of a Bayesian prescription when the prior itself is subjective.

The judge complained that he couldn’t say exactly how many of one particular type of Nike trainer there are in the country. National sales figures for sports shoes are just rough estimates.

And so he decided that Bayes’ theorem shouldn’t again be used unless the underlying statistics are “firm”. The decision could affect drug traces and fibre-matching from clothes, as well as footwear evidence, although not DNA.

This is a reasonable judgment even if the court upholds Bayesian logic per se.  Because the prior probability of a second pair of matching shoes can be deduced from the sales figures only under some assumptions about the distribution of shoes with various tread patterns.  The expert witnesses probably assumed that the accused and a hypothetical third-party murderer were randomly assigned tread patterns on their Nikes and that these assignments were independent.  But if the two live in the same town and shop at the same shoe store and if that store sold shoes with the same tread pattern, then that assumption would significantly understate the probability of a match.

Via Tyler Cowen, a quote from Daniel Kahneman on why a sandwich made by someone else tastes better.

When you make your own sandwich, you anticipate its taste as you’re working on it. And when you think of a particular food for a while, you become less hungry for it later. Researchers at Carnegie Mellon University, for example, found that imagining eating M&Ms makes you eat fewer of them. It’s a kind of specific satiation, just as most people find room for dessert when they couldn’t have another bite of their steak. The sandwich that another person prepares is not “preconsumed” in the same way.

Put aside the selection effect that, conditional on a person making a sandwich for you, they are likely a better sandwich maker than you.  Even in a randomized sandwich trial the effect would be there, but I have a different theory as to why:

A large part of tasting is actually smelling.  You can verify this by, for example, eating an onion with your nose plugged.  Our sense of smell tends to filter out persistent smells after we have been exposed to them for a while, so that we cannot smell them anymore.  This means that when you are cooking in the kitchen, surrounded by the aromas of your food, you are quickly desensitized to them.  Then when you sit down to eat, it is like tasting without smelling.

When your spouse has done the cooking you were likely in another room, isolated from the aromas.  When you walk into the kitchen to eat, you get to smell and taste the food at the same time.  That’s why it tastes better to you.  The same idea applies to leftovers.  It takes much less time to reheat leftovers than it took to prepare the food in the first place so you retain sensitivity to more of the aromas when it comes time to eat.

Bruce Riedel, who ran President Obama’s AfPak review, now favors containment over engagement:

It is time to move to a policy of containment, which would mean a more hostile relationship. But it should be a focused hostility, aimed not at hurting Pakistan’s people but at holding its army and intelligence branches accountable. When we learn that an officer from Pakistan’s Inter-Services Intelligence, or ISI, is aiding terrorism, whether in Afghanistan or India, we should put him on wanted lists, sanction him at the United Nations and, if he is dangerous enough, track him down. Putting sanctions on organizations in Pakistan has not worked in the past, but sanctioning individuals has — as the nuclear proliferator Abdul Qadeer Khan could attest.

It is useful to think of the US-Pakistan game as a principal-agent relationship.  The US (principal) would like to “pay for performance” and make a transfer if and only if the Pakistani Army (agent) captures terrorists and quashes the Taliban.  Performing this task is costly for Pakistan for many reasons.  For one, they use the terrorists as proxies in their fight against India.  But if the US values elimination of terrorists enough, there is a transfer, or sequence of transfers, large enough to persuade Pakistan to work hard on America’s behalf.  For the transfer scheme to work, the US has to be able to commit to pay.  If Pakistan is too successful, then the US has no incentive to continue paying them.  Knowing this, Pakistan does not want to work too hard on America’s behalf.  Do enough work to keep the money rolling in but not enough to kill off the goose laying the golden eggs.

This delicate balancing act can tip one way or another with random events.  After one such huge event, the killing of Osama Bin Laden, the relationship has gone sour.  Perhaps we are in a new phase; that is the gist of Riedel’s column.  But how should this be managed?  I need to think about Part 2….

Ba Le is well known – I found out about it from Check Please, of all places.  Despite its fame, lines are short and service is fast.  The location in Uptown makes it a little out of the way, I guess.

The shop mainly sells Vietnamese banh mi sandwiches, so if you are looking for pho you’ll have to find somewhere else on Argyle Street.  We’ve gotten the vegetarian, chicken, and BBQ pork banh mi – all were good and popular even with the kids.  The spring rolls from the fridge are also very good.  The hot appetizers should be avoided.  The smoothies aren’t so great either.  There is a large choice of Vietnamese desserts, various combinations of beans, corn and tapioca in coconut milk.  I’ve had some I loved and some I hated but I can never remember which ones were good!  I still make the longish drive from Evanston, so by revealed preference the good outweighs the bad, especially now that I’ve learned what to avoid.

  1. Grant Achatz to do a “cover” of El Bulli.
  2. Solution to the iced coffee problem.
  3. Hooters versus Twin Peaks.
  4. Ganjawine.
  5. Authentically undress your Victorian characters.
  6. A stochastic model of Joseph Conrad’s paragraphs.

The antidote:

While I think my work sometimes serves important purposes, and that (sadly) I am probably better at blogging and running regressions than I am at more direct forms of assistance, perhaps some deeper reflection is in order.

Of course, what I actually do is say to myself, “well, at least I don’t work in finance.”

The brief moment of concern passes, and I turn back to dispassionately regressing death and destruction.

 

Kevin Murphy is a John Bates Clark Medal winner, has a MacArthur “Genius” Award, and is a superstar in the economics profession.  But the green-eyed monster has finally stirred because I found out he is consulting for the basketball players in the current labor negotiations.

Teams pay a luxury tax if they go over a salary cap specified by the league.  The revenue generated by the tax is transferred to the other teams.  If the luxury tax is too high, teams will not go over the salary cap and the labor market for players will be moribund.  But if it is low, the rich teams will go over the salary cap, the poorer teams will get the revenue this generates, and they will themselves compete to hire players.  The labor market for players will be active.  There is some threshold luxury tax below which the market is active and above which it is inactive.  The players want a tax that is below the threshold.  Who might be able to work out this threshold?  Kevin Murphy:
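
A deliberately crude way to see where such a threshold comes from (toy numbers of my own, and it ignores the redistribution channel described above): a team spends a dollar above the cap only if the marginal revenue from that dollar exceeds the dollar plus the tax on it, so spending above the cap stops once the tax rate passes the best team’s margin.

```python
def market_active(marginal_values, tax_rate):
    """A team spends above the cap only if a marginal payroll dollar returns more than
    the dollar plus the tax on it; the above-cap market is active if any team still spends."""
    return any(v > 1 + tax_rate for v in marginal_values)

teams = [1.8, 1.4, 1.1, 0.9]               # hypothetical marginal revenue per payroll dollar
for tax in (0.3, 0.7, 1.0):
    print(tax, market_active(teams, tax))  # active below the 0.8 threshold, inactive above it
```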

[An ESPN reporter] asked a union official how they know where that player-friendly effect stops, and where the de facto hard cap kicks in.

His answer was that their economist Kevin Murphy had the task of predicting how owners would spend under the last CBA, back when it was new. Looking back, they realize his work was, the official says, “pretty much perfect.”

Who is the consultant to the teams I wonder?

(HT: MR)

The number of laws grows rapidly, yet the number of regulators grows relatively slowly.  There are always more laws than there are regulators to enforce them, and thus the number of regulators is the binding constraint.

The regulators face pressure to enforce the most recently issued directives, if only to avoid being fired or to limit bad publicity.  On any given day, it is what they are told to do.  Issuing new regulations therefore displaces the enforcement of old ones.

Read all of the corollaries.

One rejoinder would begin by observing that the origin of the problem is that future legislators are short-run players.  Given that, it may even be normatively optimal for today’s short-run legislators to speed up the pace of their own regulations so that they are in effect as long as possible before their eventual displacement by the next generation.  Of course this is conditional on today’s regulation being better than the marginal old one being displaced, which is presumably the case; otherwise it wouldn’t have been under consideration in the first place.

You and your spouse plan your lifetime household consumption collectively. This is complicated because you have different discount factors.  Your wife is patient, her discount factor is .8; you are not so patient, your discount factor is .5.  But you are a utilitarian household so you resolve your conflicts by maximizing the total household utility.

Leeat Yariv and Matt Jackson show in this cool paper that your household necessarily violates a basic postulate of rationality:  your household preferences are not time consistent.  For example, consider how you rank the following two streams of household consumption:

  1. (0,10,0,0, …)
  2. (0,0,15,0,0, …)

Each of you evaluates the first plan by computing the present value of 10 units of consumption one period from now.  Total household utility for the first plan is the sum of your two utilities, i.e. 10(0.5 + 0.8) = 13.  For the second plan you each discount the total consumption of 15 two periods from now.  Total utility for the second plan is 15(0.5^2 + 0.8^2) = 13.35.  Your utilitarian household prefers the second plan.

But now consider what happens when you actually reach date 1 and you re-consider your plan.  Now the total utilities are 20 for the first plan (since it is date 1 and you will each consume the 10 immediately if you choose the first plan) and 15(0.5+0.8) = 19.5 for the second plan.  Your household preference has reversed.
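
A quick check of that reversal with the numbers above (a toy script; the two streams are just the plans in the list):

```python
DELTAS = (0.5, 0.8)   # the two spouses' discount factors

def household_utility(stream, t=0):
    """Utilitarian household utility viewed from date t: sum each member's
    discounted utility of the consumption arriving at dates s >= t."""
    return sum(d ** (s - t) * c for d in DELTAS
               for s, c in enumerate(stream) if s >= t)

plan1 = [0, 10, 0, 0]
plan2 = [0, 0, 15, 0]

print(household_utility(plan1), household_utility(plan2))            # 13.0 vs 13.35: plan 2 preferred at date 0
print(household_utility(plan1, t=1), household_utility(plan2, t=1))  # 20.0 vs 19.5: plan 1 preferred at date 1
```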

Indeed your household exhibits a present bias:  present consumption looms large in your household preferences, so much so that you cannot forego consumption that, earlier on, you were planning to delay in exchange for a greater later reward.

Jackson and Yariv show that this example is perfectly general. If a group of individuals is trying to aggregate their conflicting time preferences, and if that group insists on a rule that respects unanimous preferences and is not dictatorial, then it must be time inconsistent.

Tom Sargent on why he joined NYU:

“I need other people to do my best work. Economics is like sports — the real stuff is being done by young guys, and you have to work hard to keep up with them. Old guys like me are like boxers — we’ve seen a lot of moves, but our reflexes are slower. There are a lot of young guys here to keep me sharp.”

To add to this: when someone comes up with some supposedly “new” moves, old guys can tell whether they are reinventing the wheel or whether it is a really new move.  Zvi Griliches used to play this role at Harvard.  Does this knowledge help you to come up with some really new moves yourself?  I’m not sure.

Do you know about the Nemmers Prize in Economics?  It is a prestigious prize awarded biennially in recognition of “major contributions to new knowledge or the development of significant new modes of analysis.” It comes with a significant monetary award funded originally via a gift from the Nemmers brothers to Northwestern University.  (There are also Nemmers Prizes in Mathematics and Music Composition.)

The Nemmers wanted the prize to eventually carry a degree of prestige that would rival the Nobel Prizes.  So far 9 prizes have been awarded and 5 of those recipients have gone on to win Nobel Prizes.  (The Nemmers charter forbids giving the award to a previous Nobel Laureate.)  They are:

  1. Peter Diamond (Nemmers 1994, Nobel 2010)
  2. Tom Sargent (Nemmers 1996, Nobel 2011)
  3. Bob Aumann (Nemmers 1998, Nobel 2005)
  4. Dan McFadden (Nemmers 2000, Nobel later that same year)
  5. Ed Prescott (Nemmers 2002, Nobel 2004)

Indeed each of the first 5 Nemmers winners later won Nobels.  The next four, Ariel Rubinstein, Lars Hansen, Paul Milgrom, and Elhanan Helpman, are perennial favorites in department pools and polls.

If you had placed your Nobel bets over the last decade based on the Nemmers record you would have made some money.

Jonathan Gruber of M.I.T. helped design Romney’s healthcare plan in Massachusetts.  In an interview with MSNBC, he says:

Romney is “the father of health-care reform…. I think he is the single person most responsible for health care reform in the United States. … I’m not trying to make a political position or a political statement, I honestly feel that way. If Mitt Romney had not stood up for this reform in Massachusetts … I don’t think it would have happened nationally. So I think he really is the guy with whom it all starts.”

On the “individual mandate” which stipulates that everyone buy insurance or face a penalty:

“This was a big decision to be made and Governor Romney clearly stated that he believed without an individual mandate healthy people could just free ride on the system.”

On Romney’s claim that the Massachusetts law led to no tax increases:

[Gruber] also noted that the Massachusetts law didn’t require any increase in taxes only because it received federal health-care funds that defrayed the costs of the new law.

These comments are bad for Romney – they weaken him against Obama and Perry.  They are good for Perry. Are they good for Obama?  This depends on whether Perry or Romney would be stronger against Obama…

My sister-in-law asked me how many new PhDs in economics find jobs in academia (as opposed to taking private sector jobs).  I said “More than half.”  Her reply surprised me, for a moment.  She said, “Really, that few?”

I was surprised because my answer gave her only a lower bound.  “More than half” could easily mean “100%.” But after a moment I realized that my sister-in-law is very sophisticated and her response made perfect sense.

I have now seen this paper presented twice and I really like it.  It’s Gul, Pesendorfer, and Strzalecki modeling the implications of limited attention for asset prices.  They show how competitive equilibrium requires large fluctuations in prices in extreme states of productivity.

Their model is very simple and the logic can be explained in a few sentences.  Output in the economy is stochastic so that there are high productivity and low productivity states.  There is one simple behavioral assumption:  each agent is limited in his cognition (or attention, or flexibility), so that he must partition states into a small number of categories.  His limitation is modeled by a constraint that his consumption must be the same in all states belonging to the same category.
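
Roughly, and in my own notation rather than the paper’s: each agent i solves something like the problem below, where pi are the state probabilities, p the state prices, e_i the endowment, and K the bound on the number of categories.

```latex
\max_{\mathcal{P}_i,\; c_i}\; \sum_{s} \pi(s)\, u\big(c_i(s)\big)
\quad\text{subject to}\quad
c_i(s) = c_i(s') \text{ whenever } s, s' \text{ lie in the same cell of } \mathcal{P}_i,
\qquad |\mathcal{P}_i| \le K,
\qquad \sum_{s} p(s)\,\big(c_i(s) - e_i(s)\big) \le 0.
```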

Qualitatively, the results of the paper follow almost immediately now. Consider the very small probability event that productivity is at the extreme low end. Agents will lump these states together with other states and so they will not be able to reduce their consumption in response to the very low output.  So in order for the market to clear there must be some agent, or small set of agents, who do pay attention to these states and reduce their consumption exactly when output is this low.

Think about these agents.  They are committing their scarce resource, namely attention, to this rare event.  It’s very unlikely that this use of their attention will pay off.  In order to give them enough incentive to do this, the reward must be very large.  And the way to reward them is to make the price extremely high in these low productivity states.  The ability to sell at these extremely high prices is the necessary incentive.

A similar logic explains why prices must be extremely low in high productivity states.  Overall there are large price fluctuations, larger than in a standard economy without the need for these incentives.

The paper is a beautiful example of the value of abstraction.  Rather than getting into details about how complexity/attention constraints actually work, it is enough to model their essential implication, namely the partition.  This keeps the canvas clean for the economic logic to stand out.

There is one conceptual point that I haven’t been able to come to terms with.  It has to do with feasibility.  In competitive equilibrium, feasibility – the assumption that total consumption equals the total endowment – is just a way of modeling market clearing.  And market clearing is the essential assumption of competitive equilibrium.

If markets didn’t clear then there would be excess demand and supply and the resulting competitive pressures would cause prices to adjust so that market clearing was restored.  That’s the usual story behind competitive equilibrium. But it doesn’t work here, at least not in the usual way.

For example, suppose that nobody was paying attention to the lowest state. All traders are grouping it with higher productivity states and so they are planning to consume more than the total output in this lowest state.  There will be excess demand.  That can’t be a competitive equilibrium, therefore someone must be paying attention to the lowest state.

But notice we cannot explain this “equilibration” by the textbook story of how prices adjust to clear markets.  No matter how much the price adjusts, since nobody is paying attention to this lowest state, nobody can change their behavior in response to changes in prices.

Instead, the story has to work something like this.  If nobody is paying attention to the lowest state, the price in that lowest state has to rise so that somebody starts paying attention to it.  That is, it’s as if there is some ex ante stage where everybody is paying attention to every state, and on the basis of all that information they decide which states to then stop paying attention to.

Probably this is taking the model too literally and there is an as if interpretation that doesn’t sound so convoluted.  I am still trying to find that.

In January of 1996 I was on the junior job market and I had just finished giving a recruiting seminar at the University of Chicago.  This was everything a job market seminar at Chicago was supposed to be.  I barely made it through the first slide, I spent the rest of the 90 minutes moderating a debate among the people in the audience, and this particular debate was punctuated by Bob Lucas shouting “Will you shut up, Derek?  Contract theory has not produced a single useful insight.”

Suffice it to say that my job market paper had nothing in common with contract theory, Bob Lucas, or the Derek in question.  But it was the most fun I have ever had in a seminar.

So I was going out to dinner with Tom Sargent and Peyton Young.  Peyton was visiting the Harris school for the quarter and we would be going in his car to the restaurant.  Actually it was his mother’s car because his mother lived in the area and he was borrowing it while he was visiting.  It was one of those Plymouth Satellite or Dodge Dart kinda cars:  a long steel plinth on wheels. Peyton warned us that it hadn’t been driven much in recent years and it had just gotten really cold in Chicago so there was some uncertainty whether it would actually start.

We got in with me on the passenger side and Sargent in the back seat and sure enough the car wasn’t going to start.  It was making a good effort, the battery was strong and the starter was cranking away but the engine just wouldn’t turn over.  After a while Tom says let him have a crack at it.  I am sitting there freezing my never-been-out-of-California butt off thinking that this is the comical extension of the surreal seminar experience I just went through.  First I had to play guest emcee while they hashed out their unfinished lunchtime arguments, and now I am going to have to get out and push the car through the snow.

But when Sargent got into the front seat there was this look on his face.  I know these American cars, he says, you gotta work with them.  He leans forward to put his ear close to the dashboard, he’s got the ignition in his right hand and his left hand looped around the steering column holding the gear shift.  And then he goes to work.  He turns the key and starts wiggling the gear shift while he is pumping the gas pedal.  This makes the car emit some strange sounds but apart from that it doesn’t accomplish much and he starts over.  He’s mumbling something under his breath about engine flooding, his head is bobbing manically and his eyes are folding down giving the effect of a cross between Doc Brown and Yoda.  In the back seat Peyton appears to have total faith in this guy’s command of the machine, meanwhile I am about to start laughing out loud.

After three or four more cycles, he starts ramping up the body English.  He is bouncing off the seat to get extra leverage on the gear shift. His ear is right up against the steering wheel, his eyes are shut and from the look on his face you would assume he was straining to heed the car’s wheezing, last dying wish. But then there is a different sound.  The dry electric sound of the starter motor starts to give way to the deep hum of internal combustion.  The car begins to bounce along with Professor Tom Sargent.  Bounce, bounce, bounce, vrum –ayngayngayng– vrum ayng vrum -vrum -vrum, BANG.  That backfire knocks me out of my seat, but it just gently opens Sargent’s eyes and Peyton’s look is pretending that he saw all of this coming.

Sargent turns back the ignition, pauses and draws his face back away from the wheel.  His head turns toward me and a grin comes over his face.  He’s saying here it goes, watch this.  He turns the key one last time and the engine rolls over like a cat, stretching out its neck for one more scratch.

“You don’t mind if I drive do you Peyton?”

On Monday, me and some dudes are gonna tailgate outside the Kellogg School of Management before the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel is announced. You should totally come. It’s gonna be ill. My pick to click this year is N. Gregory Mankiw. They’re gonna say it’s for his work on menu costs and price stickiness, but that’s bunk. It’s really so they can hand it over to someone who isn’t Paul Krugman.

Who’s on your Nobel fantasy team?

In related news, Harvard has shut down its Nobel pool (and Al Roth plans on a late breakfast.)

Unreleased ad with Steve Jobs voiceover:

This year’s votes in the Kellogg/NU poll:

Last year’s Kellogg/NU poll predicted the winner of the 2010 prize, if you decoded its message carefully.  It seemed there was ‘inside information’.  I am having a harder time this year.  There are a large number of I.O. economists at Kellogg/NU.  But, even given that, Tirole garners a huge number of votes… is this reflecting inside information or bias?  Another dramatic change is the increase in votes for econometricians – Hausman, White, Hansen and Manski.  This seems the closest to inside information, so maybe this predicts an econometrics prize.  My personal favorite is Robert Wilson.  I have always loved his research and leadership.  I also played Diplomacy at his home when I attended SITE once during grad school.  This makes him my intellectual and sentimental choice.  A prize for him, perhaps joint with Kreps, Holmstrom and/or Milgrom, would be great.  If that does not work out, if Bob Weber and I split the prize, that would be fine with me.
Last year’s Kellogg/NU poll predicted the winner of the 2010 prize, if you decoded its message carefully.  It seemed there was “inside information”.   I am having a harder time this year. There are a large number of I.O. economists at Kellogg/NU.  But, even given that, Tirole garners a huge number of votes..is this reflecting inside information or bias?  Another dramatic change is the increase in votes for econometricians – Hausman, White, Hansen and Manski.  This seem the closest to inside information so maybe this predicts an econometrics prize. My personal favorite is Robert Wilson.  I have always loved his research and leadership.  I also played Diplomacy at his home when I attended SITE once during grad school.  This makes him my intellectual and sentimental choice.   A prize for him perhaps joint with Kreps, Holmstrom and/or Milgrom would be great.  If that does not work out, if Bob Weber and I split the prize, that would be fine with me.