It’s pretty old, but worth reading given his new book.
Trivers has been teaching himself things and then growing bored with them his whole life. In 1956, when he was 13 and living in Berlin (his father was posted there by the State Department), he taught himself all of calculus in about three months. Around the same time, and with more modest success, Trivers, a skinny child picked on by bullies, tried to learn how to box, doing push-ups and covertly reading Joe Louis’s “How to Box” in the school library.
Akubra Cadabra: Tobias Schmidt.
As reported on the Planet Money blog:
It sounds ridiculous today. But not so long ago, the prospect of a debt-free U.S. was seen as a real possibility with the potential to upset the global financial system. We recently obtained the report through a Freedom of Information Act Request. You can read the whole thing here. (It’s a PDF.)
The problem? No debt means no T-bonds. Without T-bonds what happens to all the financial instruments linked to T-bond yields? The white paper was co-authored by my old Berkeley mate Jason Seligman.
An auctioneer is never tempted to employ a shill bidder.
To be sure, he might want to make the winning bidder pay a higher price, and using a shill bidder is one way to make that happen. For example, in an English auction the seller could shill bid until the price reaches the point where all but one bidder has dropped out. That price is the highest revenue he would have earned without shill bidding, and by shilling a little longer before finally dropping out, the seller could try to extract something more.
Of course, this comes at some risk for the seller because there is a chance that the high bidder will drop out before the shill bidder does, and then the seller misses out on a sale. Nevertheless, shill bidding pays off on average if the seller thinks that this small-probability loss is outweighed by the large-probability gain.
Even so, the seller would never be tempted to do this.
The reason is that he could achieve exactly the same thing using a reserve price. Before the auction even begins he can ask himself, for any given price level, whether he would want to use a shill bidder to push the price even further. If so, he could bring about exactly the same effect by setting his reserve price at the desired level.
That is, a shill bidder is just a reserve price in disguise.
(ps, you don’t have to get very fancy to see why this is wrong.)
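For anyone who wants to see the claimed equivalence in numbers, here is a minimal simulation sketch. It assumes a stylized second-price (English) setting with three bidders whose valuations are uniform on [0, 1] and a cutoff of 0.5, numbers chosen purely for illustration; under those stylized assumptions a seller who plants a shill that drops out at r collects the same expected revenue as one who announces r as a reserve. (The ps above hints at why this tidy equivalence is not the whole story.)

```python
# A minimal sketch of the claimed equivalence in a stylized second-price
# (English) setting. Valuations ~ Uniform(0, 1) and the level r = 0.5 are
# assumptions for illustration only.
import random

def revenue(bids, r):
    # Winner pays the larger of the runner-up bid and r, provided the top
    # bid clears r; otherwise there is no sale and revenue is zero.
    top, second = sorted(bids, reverse=True)[:2]
    return max(second, r) if top >= r else 0.0

random.seed(0)
n_bidders, r, trials = 3, 0.5, 200_000
rev_reserve = rev_shill = 0.0
for _ in range(trials):
    bids = [random.random() for _ in range(n_bidders)]
    # Announced reserve price r:
    rev_reserve += revenue(bids, r)
    # No reserve, but a planted shill who drops out at r; the item only
    # really sells if a genuine bidder outbids the shill:
    rev_shill += revenue(bids + [r], 0.0) if max(bids) >= r else 0.0

print(f"avg revenue with reserve: {rev_reserve / trials:.4f}")
print(f"avg revenue with shill:   {rev_shill / trials:.4f}")
```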
Economics is a unique discipline in that its technical ideas have to be explainable to regular people. Of course, the ideas in economics are not as technical as those in physics, but there is essentially nothing at stake in being able to explain Maxwell’s Demon to a lay person, although that is certainly a talent.
And most disciplines that must be explained to the great unwashed are not technical enough for that to be any challenge.
Hand-in-hand with that distinctive feature is the fact that economic ideas are about things that regular people already have opinions on (usually strong ones) [how’s that for un-dangling a preposition!]. To be able to have a useful dialog with them requires that you understand their opinions and most importantly why they have them.
Because strong opinions don’t come from nowhere. There is always some logic behind them.
So to be a good economist you must be familiar with, and see the logic behind, wrong ideas. This requires you to be sufficiently dumb. Because really, really smart people would never have entertained those ideas and will find them completely foreign and not worthy of consideration alongside the correct logic.
On the other hand you do have to be sufficiently smart to know why the logic is incomplete. But that by itself is usually not so demanding.
What is demanding is to be sufficiently dumb and sufficiently smart to be able to do both, AND also to be able to explain to BOTH the regular people and the smart people the other side.
Finally, I would add that being sufficiently dumb is crucial for finding good research projects. It has to do with the elusive “non-obvious” ideas. In practice “non-obvious” means “not obvious to the regular guy.” To spot those projects you need to have a replica of the same mental infrastructure that the regular guy is equipped with.
I do most of my reading through Google Reader, and when I get an idea for the blog I post it to Google Buzz. The small number of followers I have there will sometimes comment and/or vote for ideas that pop up there. When it’s time to write something for the blog, I go back there for ideas.
Buzz will soon be retired by Google and Reader is going to be crippled. That’s going to affect me. For one thing, I won’t have access to Google Reader feeds from many of my favorite curators, most notably Courtney Conklin Knapp, the soy sauce of the internet. And I need a new place to pre-test my ideas. (I used to think that I would use Twitter for that, but my Twitter identity has become overrun with tweets like “I went back in time so I could be the first person to write about the paradoxes of time travel.”)
Google says that the retired services are to be replaced by Google+ so I am switching to Google+. In fact I have been using it for a while now, jotting down some ideas and getting feedback before posting them here. Google+ is more of a “social” social network than Buzz so instead of just dumping links and incomprehensible notes-to-self, I am writing little rough drafts. It works well. Writing doesn’t come easy for me but for some reason when I know that what I am writing is verifiably a rough draft, I loosen up a bit and it comes easier. The feedback is great too.
(One potential downside is that I write something wrong, people on G+ point out that it’s wrong, and I am too embarrassed to post it here. You learn a lot from wrong ideas so that would be a loss.)
So if you are on G+ I hope to see you there, and I would be happy to get your feedback.
By the way, know any good blogs? It seems like a large number that I subscribe to on Reader have gone dark so I am looking for some new ones. “What’s Hot” in Google Reader is already gone!
Advertisers want information about your tastes and habits so they can decide how much they are willing to pay to advertise at you. That information is stored by your web browser on your hard drive. Did you know that every time you access a web page you freely hand over that information to a broker who immediately sells it to advertisers who then immediately use it to bid for access to your eyeballs?
Here’s how it works. Internet vendors, say Amazon, pass information to you about your transactions and your browser stores it in the form of cookies. Later on, advertisers are alerted when you are accessing a web page and they compete in an auction for the ad space on that page. At that moment, unless you have disabled the passing of cookies, your browser is sending to potential advertisers all of the cookies stored on your hard drive that might contain relevant information about you.
However, many of the really valuable cookies are encrypted by the web site that put them there. For example, if Amazon encrypts its cookies then even though your browser gives them away for free, they are of no use to advertisers.
That is, unless the advertisers purchase the key with which to decrypt your cookies. And indeed Amazon will make money from your data by selling its keys to advertisers. It could sell them directly but it will probably prefer to sell them through an exchange where advertisers come to buy cookies by the jar.
The interesting thing about the market for cookies is that you are the owner of the asset and yet all of the returns are going to somebody else. And it’s not because your asset and mine are perfect substitutes. You are the monopolistic provider of information about you, and when you arrive at a website it is you the advertisers are bidding for.
How long will it be before you get a seat at the exchange? Nothing stops you from putting a second layer of encryption over Amazon’s cookies and demanding that advertisers pay you for the key. Nothing stops me from paying you for exclusive ownership of your keys, cornering the market-in-you, and recouping the monopoly profit.
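Just to make the “second layer of encryption” idea concrete, here is a toy sketch. It uses the third-party Python cryptography package, and the key names and cookie contents are invented for illustration; this is not how cookies are actually handled today, just the logic of the proposal.

```python
# Toy sketch of the "second layer" idea: wrap the vendor's encrypted cookie
# in your own encryption, so an advertiser needs to buy a key from the
# vendor AND a key from you. Requires `pip install cryptography`.
from cryptography.fernet import Fernet

vendor_key = Fernet.generate_key()      # Amazon's key, in the story
consumer_key = Fernet.generate_key()    # your key: the second layer

cookie = b"last_purchase=espresso_machine; price_paid=450"  # made-up data

# The vendor encrypts the cookie before handing it to your browser.
vendor_ciphertext = Fernet(vendor_key).encrypt(cookie)

# You wrap the vendor's ciphertext in your own layer of encryption.
double_wrapped = Fernet(consumer_key).encrypt(vendor_ciphertext)

# An advertiser now needs both keys to recover the plaintext.
recovered = Fernet(vendor_key).decrypt(
    Fernet(consumer_key).decrypt(double_wrapped)
)
assert recovered == cookie
```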
File under “Feel Free To Profit From This Idea.”
(I learned about the market for cookies from Susan Athey’s new paper and a post-seminar dinner conversation with her and Peter Klibanoff, Ricky Vohra, and Kane Sweeney.)
(Picture: Scale Up Machine Fail from http://www.f1me.net)
Via BoingBoing, why is the Indiana Election Commission putting cubes of styrofoam in their mailers?
The Styrofoam cube enclosed in this envelope is being included by the sender to meet a United States Postal Service regulation. This regulation requires a first class letter or flat using the Delivery or Signature Confirmation service to become a parcel and that it “is in a box or, if not in a box, is more than 3/4 of an inch thick at its thickest point.” The cube has no other purpose and may be disposed of upon opening this correspondence.
Assume that people like to have access to a community of people with similar habits, tastes, demographics, etc. A “community” is just a group of some minimal absolute size. Then the denser the population the more likely you will find enough people to form such a community.
But this effect is larger for people whose tastes, habits, and demographics are more idiosyncratic than for people in the majority. Garden-variety people will find a community of garden-variety people just about anywhere they go. By contrast, if types of people are randomly distributed across locations, the density of cities makes it more likely that a community can be assembled there.
But that means that types won’t be just randomly distributed across locations. The unique types are willing to pay more to live in cities than the garden-variety types.
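A quick back-of-the-envelope calculation makes the asymmetry vivid. The type frequencies, population sizes, and the threshold of 20 people for a “community” below are all invented assumptions, chosen only to illustrate the logic.

```python
# Sketch: the chance of finding at least m like-minded people rises with
# population size, and the gain is far larger for rare types. All numbers
# here are illustrative assumptions.
from math import comb

def p_community(population, type_freq, m=20):
    # P(at least m people of this type in a population of the given size),
    # computed as 1 - P(fewer than m) under a binomial model.
    p_fewer = sum(comb(population, k) * type_freq**k * (1 - type_freq)**(population - k)
                  for k in range(m))
    return 1 - p_fewer

for label, freq in [("garden-variety (10%)", 0.10), ("idiosyncratic (0.1%)", 0.001)]:
    town, city = p_community(5_000, freq), p_community(500_000, freq)
    print(f"{label}: small town {town:.3f} vs big city {city:.3f}")
```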
From the great blog Mind Hacks:
Because of this, the new study looked at volleyball, where the players are separated by a net and play from different sides of the court. Additionally, players rotate position after every rally, meaning it’s more difficult to ‘clamp down’ on players from the opposing team if they seem to be doing well.
The research first established that the belief in the ‘hot hand’ was common in volleyball players, coaches and fans, and then looked to see if scoring patterns support it – to see if scoring a point made a player more likely to score another.
It turns out that over half the players in Germany’s first-division volleyball league show the ‘hot hand’ effect – streaks of inspiration were common and points were not scored in an independent ‘coin toss’ manner.
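The basic check described in the quote is easy to sketch. The outcome sequence below is invented, and a serious analysis would also use permutation tests and pool across matches, but the core comparison is just this:

```python
# Minimal hot-hand check: is the scoring rate right after a made point
# higher than right after a lost point? The sample sequence is invented.
def hot_hand_split(outcomes):
    # outcomes: list of 1 (point scored) / 0 (point lost), in order
    after_hit = [b for a, b in zip(outcomes, outcomes[1:]) if a == 1]
    after_miss = [b for a, b in zip(outcomes, outcomes[1:]) if a == 0]
    rate = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return rate(after_hit), rate(after_miss)

example = [1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1]
p_hit, p_miss = hot_hand_split(example)
print(f"P(score | just scored) = {p_hit:.2f}, P(score | just missed) = {p_miss:.2f}")
```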
Quoting The Angry Professor:
I need a gmail filter that sends a custom vacation message to certain people. The filter needs to scan the message for the latest date mentioned and determine the geographical location of the sender. It must next add four weeks to that latest date. The longitude of the sender’s geographic location must be advanced by 180° and the latitude multiplied by -1. After triangulating the dry land nearest to the rotated coordinates, the filter must finally send the following “vacation” message:
Thank you for your email. I am [on dry land nearest the point exactly half-way around the globe from you]. I will not have email contact until [computed date], but I will try to respond to your message as soon as I return.
Via Vinnie Bergl, here is a post which examines pitch sequences in Major League Baseball, looking for serial correlation in the pitch type, i.e. fastball, changeup, curve, etc. The motivating puzzle is the typical baseball lore that, e.g., the changeup “sets up” the fastball. If that were true then the batter would know he is going to face a fastball next and this would reduce the pitcher’s advantage. If the pitcher benefits from being unpredictable then there should be no serial correlation. The linked post gives a cursory look at the data which shows in fact the opposite of the conventional lore: changeups are followed by changeups.
There is a problem however with the simple analysis which groups together all pitch sequences from all pitchers. Not every pitcher throws a changeup. Conditional on the first pitch being a changeup, the probability increases that the next pitch will be a changeup simply because we learn from the first pitch that we are looking at a pitcher who has a changeup in his arsenal. To correct for this the analysis would have to be carried out at the individual level.
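To see how pooling alone can manufacture the pattern, here is a small simulation sketch. The pitcher “types” and their changeup rates are invented; within each pitcher the pitches are drawn independently, yet the pooled data show a changeup being more likely right after another changeup.

```python
# Sketch of the pooling problem: even if every pitcher mixes independently
# across pitches, pooling pitchers with different changeup rates produces
# apparent positive serial correlation. All rates are invented.
import random

random.seed(1)
pitchers = [0.30] * 50 + [0.00] * 50   # half throw changeups 30% of the time, half never
pairs = []
for rate in pitchers:
    seq = [random.random() < rate for _ in range(200)]   # i.i.d. within each pitcher
    pairs += list(zip(seq, seq[1:]))

p_after_change = sum(b for a, b in pairs if a) / max(1, sum(a for a, _ in pairs))
p_overall = sum(b for _, b in pairs) / len(pairs)
print(f"P(changeup | previous changeup) = {p_after_change:.3f}")
print(f"P(changeup overall)             = {p_overall:.3f}")
```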
Should we expect serial independence? If the game were perfectly stationary, yes. But suppose that after throwing the first curveball the pitcher gets a better feel for the pitch and is temporarily better at throwing a curveball. If pitches were serially independent, then the batter would not update his beliefs about the next pitch, and the curveball would have just as much surprise but now slightly more raw effectiveness. That would mean that the pitcher would certainly throw a curveball again.
That’s a contradiction, so there cannot be serial independence. To find the new equilibrium we need to remember that as long as the pitcher is randomizing his pitch sequence, he must be indifferent among all pitches he throws with positive probability. So something must offset the temporary advantage of the curveball, and this is achieved by the batter looking for a curveball. That can only happen in equilibrium if the pitcher is indeed more likely to throw a curveball.
Thus, positive serial correlation is to be expected. Now this ignores the batter’s temporary advantage in spotting the curveball. It may be that the surprise power of a breaking pitch is reduced when the batter gets an earlier read on the rotation. After seeing the first curveball he may know what to look for next and this may in fact make a subsequent curveball less effective, ceteris paribus. This model would then imply negative serial correlation: other pitches are temporarily more effective than the curveball so the batter should be expecting something else.
That would bring us back to the conventional account. But note that the route to “setting up the fastball” was not that it makes the fastball more effective in absolute terms, but that it makes it more effective in relative terms because the curveball has become temporarily less effective.
The latter hypothesis could be tested by the following comparison. Look at curveballs that end the at bat but not the inning. The next batter will not have had the advantage of seeing the curveball up close but the pitcher still has the advantage of having thrown one. We should see positive serial correlation here; that is, the first pitch to the new batter should be more likely (than average) to be a curveball. If in the data we see negative correlation overall but positive correlation in this scenario then it is evidence of the batter-experience effect.
(Update: the Fangraphs blog has re-done the analysis at the individual level and it looks like the positive correlation survives. One might still worry about batter-specific fixed effects. Maybe certain batters are more vulnerable to the junk pitches and so the first junk pitch signals that we are looking at a confrontation with such a batter.)
The “Vivace” (vivacious) is a burger topped with bacon, salted spinach, marinated onions and mayonnaise with mustard seeds.
The “Adagio” (slowly, like the musical term) is also a hamburger, topped with sweet-and-sour eggplant strips, sliced tomatoes and salted ricotta, all between a bun covered in sliced almonds.
There’s also a tiramisu.
via Arthur Robson:
While appeals often unmask shaky evidence, this was different. This time, a mathematical formula was thrown out of court. The footwear expert made what the judge believed were poor calculations about the likelihood of the match, compounded by a bad explanation of how he reached his opinion. The conviction was quashed.
And the judge ruled that Bayes’ law for conditional probabilities could not be used in court. Statisticians, mathematicians, and prosecutors are worried that justice will suffer as a result. The statistical evidence centered on the likelihood of a coincidental match between the shoeprint and shoes owned by the defendant.
In the shoeprint murder case, for example, it meant figuring out the chance that the print at the crime scene came from the same pair of Nike trainers as those found at the suspect’s house, given how common those kinds of shoes are, the size of the shoe, how the sole had been worn down and any damage to it. Between 1996 and 2006, for example, Nike distributed 786,000 pairs of trainers. This might suggest a match doesn’t mean very much. But if you take into account that there are 1,200 different sole patterns of Nike trainers and around 42 million pairs of sports shoes sold every year, a matching pair becomes more significant.
Now if I can prove to jurors that there was one shoe in the basement and another shoe upstairs, then probably I can legitimately claim to have proven that the total number of shoes is two, because the laws of arithmetic should be binding on the jurors’ deductions. And if there is a chance that a juror comes to some different conclusion then it would make sense for an expert witness, or even the judge, to tell the juror that he is making a mistake. Indeed a courtroom demonstration could prove the juror wrong.
But do the “laws” of probability have the same status? If I can prove to the juror that his prior should attach probability p to A and probability q to [A and B], and if the evidence proves that A is true, should he then be required to attach probability q/p to B? Suppose for example that a juror disagreed with this conclusion. Could he be proven wrong? A courtroom demonstration could show something about relative frequencies, but the juror could dispute that these have anything to do with probabilities.
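(For instance, if p = 0.4 and q = 0.1, then upon learning A he would be required to attach probability 0.1/0.4 = 0.25 to B.)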
It appears though that the judge’s ruling in this case was not on the basis of Bayesian/frequentist philosophy, but rather about the validity of a Bayesian prescription when the prior itself is subjective.
The judge complained that he couldn’t say exactly how many of one particular type of Nike trainer there are in the country. National sales figures for sports shoes are just rough estimates.
And so he decided that Bayes’ theorem shouldn’t again be used unless the underlying statistics are “firm”. The decision could affect drug traces and fibre-matching from clothes, as well as footwear evidence, although not DNA.
This is a reasonable judgment even if the court upholds Bayesian logic per se. Because the prior probability of a second pair of matching shoes can be deduced from the sales figures only under some assumptions about the distribution of shoes with various tread patterns. The expert witnesses probably assumed that the accused and a hypothetical third-party murderer were randomly assigned tread patterns on their Nikes and that these assignments were independent. But if the two live in the same town and shop at the same shoe store and if that store sold shoes with the same tread pattern, then that assumption would significantly understate the probability of a match.
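A rough sketch of how much that assumption can matter: the 1,200 sole patterns figure is from the article, while the size of the local store’s stock and the uniform-draw simplification are invented for illustration.

```python
# Rough sketch: the chance of a coincidental tread match is much higher if
# the accused and a hypothetical alternative culprit buy from the same small
# local stock than if both draw independently from the national distribution.
# The store-stock figure and uniform draws are illustrative assumptions.
n_patterns_national = 1200      # distinct Nike sole patterns (from the article)
n_patterns_local_store = 12     # assumed stock at the one local shoe store

p_independent = 1 / n_patterns_national   # independent national draws
p_same_store = 1 / n_patterns_local_store # both buy at the same store

print(f"independent national draws: {p_independent:.4%}")
print(f"both buy at the same store: {p_same_store:.4%}")
```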
Via Tyler Cowen, a quote from Daniel Kahneman on why a sandwich made by someone else tastes better.
When you make your own sandwich, you anticipate its taste as you’re working on it. And when you think of a particular food for a while, you become less hungry for it later. Researchers at Carnegie Mellon University, for example, found that imagining eating M&Ms makes you eat fewer of them. It’s a kind of specific satiation, just as most people find room for dessert when they couldn’t have another bite of their steak. The sandwich that another person prepares is not “preconsumed” in the same way.
Put aside the selection effect: conditional on a person making a sandwich for you, it is likely that they are a better sandwich maker than you. Even in a randomized sandwich trial the effect would be there, but I have a different theory as to why:
A large part of tasting is actually smelling. You can verify this by, for example, eating an onion with your nose plugged. Our sense of smell tends to filter out persistent smells after we have been exposed to them for a while, so that we cannot smell them anymore. This means that when you are cooking in the kitchen, surrounded by the aromas of your food, you are quickly desensitized to them. Then when you sit down to eat, it is like tasting without smelling.
When your spouse has done the cooking you were likely in another room, isolated from the aromas. When you walk into the kitchen to eat, you get to smell and taste the food at the same time. That’s why it tastes better to you. The same idea applies to leftovers. It takes much less time to reheat leftovers than it took to prepare the food in the first place so you retain sensitivity to more of the aromas when it comes time to eat.
While I think my work sometimes serves important purposes, and that (sadly) I am probably better at blogging and running regressions than I am at more direct forms of assistance, perhaps some deeper reflection is in order.
Of course, what I actually do is say to myself, “well, at least I don’t work in finance.”
The brief moment of concern passes, and I turn back to dispassionately regressing death and destruction.
The number of laws grows rapidly, yet the number of regulators grows relatively slowly. There are always more laws than there are regulators to enforce them, and thus the number of regulators is the binding constraint.
The regulators face pressure to enforce the most recently issued directives, if only to avoid being fired or to limit bad publicity. On any given day, enforcing the newest rules is what they are told to do. Issuing new regulations therefore displaces the enforcement of old ones.
One rejoinder would begin by observing that the origin of the problem is that future legislators are short-run players. Given that, it may even be normatively optimal for today’s short-run legislators to speed up the pace of their own regulations so that they are in effect as long as possible before their eventual displacement by the next generation. Of course this is conditional on today’s regulation being better than the marginal old one being displaced, which is presumably the case; otherwise it wouldn’t have been under consideration in the first place.
You and your spouse plan your lifetime household consumption collectively. This is complicated because you have different discount factors. Your wife is patient, her discount factor is .8; you are not so patient, your discount factor is .5. But you are a utilitarian household so you resolve your conflicts by maximizing the total household utility.
Leeat Yariv and Matt Jackson show in this cool paper that your household necessarily violates a basic postulate of rationality: your household preferences are not time consistent. For example, consider how you rank the following two streams of household consumption:
- (0,10,0,0, …)
- (0,0,15,0,0, …)
Each of you evaluates the first plan by computing the present value of 10 units of consumption one period from now. Total household utility for the first plan is the sum of your two utilities, i.e. 0.8 × 10 + 0.5 × 10 = 13. For the second plan you each discount the total consumption of 15 two periods from now. Total utility for the second plan is (0.8)^2 × 15 + (0.5)^2 × 15 = 9.6 + 3.75 = 13.35.
Your utilitarian household prefers the second plan.
But now consider what happens when you actually reach date 1 and you re-consider your plan. Now the total utilities are 10 + 10 = 20 for the first plan (since it is date 1 and you will each consume the 10 immediately if you choose the first plan) and 0.8 × 15 + 0.5 × 15 = 19.5 for the second plan. Your household preference has reversed.
Indeed your household exhibits a present bias: present consumption looms large in your household preferences, so much so that you cannot forego consumption that, earlier on, you were planning to delay in exchange for a greater later reward.
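Here is a minimal check of the arithmetic in the example; the discount factors and consumption streams are the ones above.

```python
# Quick check of the preference reversal in the example: discount factors
# 0.8 and 0.5, plan A = 10 units at date 1, plan B = 15 units at date 2.
def household_value(stream, start, betas=(0.8, 0.5)):
    # Sum, over dates t >= start, of consumption at t weighted by each
    # member's discount factor raised to (t - start), then added up.
    return sum(sum(b ** (t - start) for b in betas) * c
               for t, c in enumerate(stream) if t >= start)

plan_a = (0, 10, 0, 0)
plan_b = (0, 0, 15, 0)

print("at date 0:", household_value(plan_a, 0), "vs", household_value(plan_b, 0))  # 13 vs 13.35
print("at date 1:", household_value(plan_a, 1), "vs", household_value(plan_b, 1))  # 20 vs 19.5
```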
Jackson and Yariv show that this example is perfectly general. If a group of individuals is trying to aggregate their conflicting time preferences, and if that group insists on a rule that respects unanimous preferences and is not dictatorial, then it must be time inconsistent.
Do you know about the Nemmers Prize in Economics? It is a prestigious prize awarded biennially in recognition of “major contributions to new knowledge or the development of significant new modes of analysis.” It comes with a significant monetary award funded originally via a gift from the Nemmers brothers to Northwestern University. (There are also Nemmers Prizes in Mathematics and Music Composition.)
The Nemmers wanted the prize to eventually carry a degree of prestige that would rival the Nobel Prizes. So far 9 prizes have been awarded and 5 of those recipients have gone on to win Nobel Prizes. (The Nemmers charter forbids giving the award to a previous Nobel Laureate.) They are
- Peter Diamond (Nemmers 1994 Nobel 2010)
- Tom Sargent (Nemmers 1996 Nobel 2011)
- Bob Aumann (Nemmers 1998 Nobel 2005)
- Dan McFadden (Nemmers 2000 Nobel later that same year)
- Ed Prescott (Nemmers 2002 Nobel 2004)
Indeed each of the first 5 Nemmers winners later won Nobels. The next four, Ariel Rubinstein, Lars Hansen, Paul Milgrom, and Elhanan Helpman, are perennial favorites in department pools and polls.
If you had placed your Nobel bets over the last decade based on the Nemmers record you would have made some money.
My sister-in-law asked me how many new PhDs in economics find jobs in academia (as opposed to taking private sector jobs.) I said “More than half.” Her reply surprised me, for a moment. She said “Really, that few?”
I was surprised because my answer gave her only a lower bound. “More than half” could easily mean “100%.” But after a moment I realized that my sister-in-law is very sophisticated and her response made perfect sense.
I have now seen this paper presented twice and I really like it. It’s Gul, Pesendorfer, and Strzalecki modeling the implications of limited attention on asset prices. They show how competitive equilibrium requires large fluctuations in prices in extreme states of productivity.
Their model is very simple and the logic can be explained in a few sentences. Output in the economy is stochastic so that there are high productivity and low productivity states. There is one simple behavioral assumption: each agent is limited in his cognition (or attention, or flexibility), so that he must partition states into a small number of categories. His limitation is modeled by a constraint that his consumption must be the same in all states belonging to the same category.
Qualitatively, the results of the paper follow almost immediately now. Consider the very small probability event that productivity is at the extreme low end. Agents will lump these states together with other states and so they will not be able to reduce their consumption in response to the very low output. So in order for the market to clear there must be some agent, or small set of agents, who do pay attention to these states and reduce their consumption exactly when output is this low.
Think about these agents. They are committing their scarce resource, namely attention, to this rare event. It’s very unlikely that this use of their attention will pay off. In order to give them enough incentive to do this, the reward must be very large. And the way to reward them is to make the price extremely high in these low productivity states. The ability to sell at these extremely high prices is the necessary incentive.
A similar logic explains why prices must be extremely low in high productivity states. Overall there are large price fluctuations, larger than in a standard economy without the need for these incentives.
The paper is a beautiful example of the value of abstraction. Rather than getting into details about how complexity/attention constraints actually work, it is enough to model their essential implication, namely the partition. This keeps the canvas clean for the economic logic to stand out.
There is one conceptual point that I haven’t been able to come to terms with. It has to do with feasibility. In competitive equilibrium, feasibility (the assumption that total consumption equals the total endowment) is just a way of modeling market clearing. And market clearing is the essential assumption of competitive equilibrium.
If markets didn’t clear then there would be excess demand and supply and the resulting competitive pressures would cause prices to adjust so that market clearing was restored. That’s the usual story behind competitive equilibrium. But it doesn’t work here, at least not in the usual way.
For example, suppose that nobody was paying attention to the lowest state. All traders are grouping it with higher productivity states and so they are planning to consume more than the total output in this lowest state. There will be excess demand. That can’t be a competitive equilibrium, therefore someone must be paying attention to the lowest state.
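As a toy version of that market-clearing argument (all numbers invented): if every agent lumps the rare low-output state together with the normal ones and plans normal-state consumption there, planned consumption exceeds what the rare state actually delivers.

```python
# Toy illustration: with nobody paying attention to the rare low state,
# planned consumption there exceeds available output. Numbers are invented.
output = {"normal": 100.0, "rare_low": 40.0}
n_agents = 10

# Each inattentive agent plans the same consumption in both states,
# calibrated to the normal state (an even split, for simplicity):
planned_each = output["normal"] / n_agents
planned_total = planned_each * n_agents

excess_demand = planned_total - output["rare_low"]
print(f"planned {planned_total}, available {output['rare_low']}, "
      f"excess demand {excess_demand}")   # someone has to pay attention
```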
But notice we cannot explain this “equilibration” by the textbook story of how prices adjust to clear markets. No matter how much the price adjusts, since nobody is paying attention to this lowest state, nobody can change their behavior in response to changes in prices.
Instead, the story has to work something like this. If nobody is paying attention to the lowest state, the price in that lowest state has to rise so that somebody starts paying attention to it. That is, it’s as if there is some ex ante stage where everybody is paying attention to every state, and on the basis of all that information they decide which states to then stop paying attention to.
Probably this is taking the model too literally and there is an as if interpretation that doesn’t sound so convoluted. I am still trying to find that.
In January of 1996 I was on the junior job market and I had just finished giving a recruiting seminar at the University of Chicago. This was everything a job market seminar at Chicago was supposed to be. I barely made it through the first slide, I spent the rest of the 90 minutes moderating a debate among the people in the audience, and this particular debate was punctuated by Bob Lucas shouting “Will you shut up Derek? Contract theory has not produced a single useful insight.”
Suffice it to say that my job market paper had nothing to do with contract theory, Bob Lucas, or the Derek in question. But it was the most fun I have ever had in a seminar.
So I was going out to dinner with Tom Sargent and Peyton Young. Peyton was visiting the Harris school for the quarter and we would be going in his car to the restaurant. Actually it was his mother’s car because his mother lived in the area and he was borrowing it while he was visiting. It was one of those Plymouth Satellite or Dodge Dart kinda cars: a long steel plinth on wheels. Peyton warned us that it hadn’t been driven much in recent years and it had just gotten really cold in Chicago so there was some uncertainty whether it would actually start.
We got in with me on the passenger side and Sargent in the back seat and sure enough the car wasn’t going to start. It was making a good effort, the battery was strong and the starter was cranking away but the engine just wouldn’t turn over. After a while Tom says let him have a crack at it. I am sitting there freezing my never-been-out-of-California butt off thinking that this is the comical extension of the surreal seminar experience I just went through. First I had to play guest emcee while they hashed out their unfinished lunchtime arguments, and now I am going to have to get out and push the car through the snow.
But when Sargent got into the front seat there was this look on his face. I know these American cars, he says, you gotta work with them. He leans forward to put his ear close to the dashboard, he’s got the ignition in his right hand and his left hand looped around the steering column holding the gear shift. And then he goes to work. He turns the key and starts wiggling the gear shift while he is pumping the gas pedal. This makes the car emit some strange sounds but apart from that it doesn’t accomplish much and he starts over. He’s mumbling something under his breath about engine flooding, his head is bobbing manically and his eyes are folding down giving the effect of a cross between Doc Brown and Yoda. In the back seat Peyton appears to have total faith in this guy’s command of the machine, meanwhile I am about to start laughing out loud.
After three or four more cycles, he starts ramping up the body English. He is bouncing off the seat to get extra leverage on the gear shift. His ear is right up against the steering wheel, his eyes are shut and from the look on his face you would assume he was straining to heed the car’s wheezing, last dying wish. But then there is a different sound. The dry electric sound of the starter motor starts to give way to the deep hum of internal combustion. The car begins to bounce along with Professor Tom Sargent. Bounce, bounce, bounce, vrum –ayngayngayng– vrum ayng vrum -vrum -vrum, BANG. That backfire knocks me out of my seat, but it just gently opens Sargent’s eyes and Peyton’s look is pretending that he saw all of this coming.
Sargent turns back the ignition, pauses and draws his face back away from the wheel. His head turns toward me and a grin comes over his face. He’s saying here it goes, watch this. He turns the key one last time and the engine rolls over like a cat, stretching out its neck for one more scratch.
“You don’t mind if I drive do you Peyton?”
On Monday, me and some dudes are gonna tailgate outside the Kellogg School of Management before the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel is announced. You should totally come. It’s gonna be ill. My pick to click this year is N. Gregory Mankiw. They’re gonna say it’s for his work on menu costs and price stickiness, but that’s bunk. It’s really so they can hand it over to someone who isn’t Paul Krugman.
Who’s on your Nobel fantasy team?
In related news, Harvard has shut down its Nobel pool (and Al Roth plans on a late breakfast.)
The Mexico City Assembly is considering a measure which would enable marrying couples to specify a fixed, finite duration for the marriage contract.
The minimum marriage contract would be for two years and could be renewed if the couple stays happy. The contracts would include provisions on how children and property would be handled if the couple splits.
“The proposal is, when the two-year period is up, if the relationship is not stable or harmonious, the contract simply ends,” said Leonel Luna, the Mexico City assemblyman who co-authored the bill.
I wonder if they considered the various other margins along which to move to an interior solution. We could be married forever but only on Thursdays. Or if you are not yet ready to marry me I can still incentivize you to invest in me by writing you an option to marry me in the future. Or I can go public, issuing matrimony shares. My commitment to you is proportional to your ownership stake.
The NPR blog Planet Money is asking you to guess a number:
This is a guessing game. To play, pick a number between 0 and 100. The goal is to pick the number that’s closest to half the average of all guesses.
So, for example, if the average of all guesses were 80, the winning number would be 40.
The game will close at 11:59 p.m. Eastern time on Monday, October 10. We’ll announce the winner — and explain why we’re doing this — on Tuesday, October 11.
This is a famous game that has been used in numerous experiments investigating whether real people are as rational as game theory and economic theory assumes they are. Powerful logic suggests that you should guess the number zero:
1. For sure the average will be no greater than 100 so half the average will be no greater than 50.
2. Anybody who is smart enough to figure this out will guess something no greater than 50 so the average will be no greater than 50 and half the average will be no greater than 25.
3. Anybody who is smart enough to figure this out will guess something no greater than 25, etc.
Of course time after time in experiments the actual guesses are very far from zero, demonstrating that people are in fact less rational than economic theory assumes.
Planet Money, however, is an intelligent blog and when they analyze the results of their experiment, they won’t jump to that conclusion. They will be insightful enough to see past the straw man.
It all starts at point 2. It is true that people who are smart enough to figure out point 1 will guess something no greater than 50, but almost all of those people are also smart enough to know that there is a sizeable proportion of people who are not that smart. And thus these smart people, if they are rational, will not deduce in point 2 that the average will be no greater than 50. The induction will not take them past point 2.
In fact, some of the smartest and most rational people in the world, professional chess players, guess numbers around 23 when they play these experiments. (To be precise, the chess players were playing a version of the Beauty Contest where you are supposed to guess 2/3 of the average. Their guesses would be somewhat lower in the Planet Money version; see below.) And that is because if someone is indeed as rational as game theory and economic theory assumes she is, and also she is smart enough to know that
- Not everybody is that rational,
- Most of the rational people know that not everybody is that rational,
- Most of the rational people know that most (but not all) of the rational people know that not everybody is that rational
etc., then she will never choose anything close to zero. Indeed, according to my calculations, the ultra-rational guess in the Planet Money Beauty Contest is about 16. Here is how I came up with that number.
I think that
- About 2/5 of the Planet Money readers will be confused by the rules of the game and guess 50.
- Another 3/10 will be smart enough to know that the rational thing to do is to guess something less than 50, and reasoning as in the straw-man argument they will guess 25.
- The remaining 3/10 of the population are the really smart ones.
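One way to complete the arithmetic under those assumptions, which is my guess at the intended calculation: the really smart 3/10 best-respond to the whole crowd, themselves included, so their guess x solves x = 0.5 × (0.4 × 50 + 0.3 × 25 + 0.3 × x). A few lines of fixed-point iteration give roughly 16.

```python
# Sketch of the fixed-point calculation under the stated assumptions:
# 2/5 guess 50, 3/10 guess 25, and the remaining 3/10 best-respond to the
# resulting average (which includes their own guess).
x = 0.0
for _ in range(100):
    x = 0.5 * (0.4 * 50 + 0.3 * 25 + 0.3 * x)
print(round(x, 1))   # about 16.2
```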
The roses in your garden are dead and your gardener tells you that there are bugs that have to be killed if you want the next generation of roses to survive. So you pay him to plant new roses and spray poison to keep the bugs away.
Each week he comes back and tells you that the bugs are still threatening to kill the roses and you will need to pay him again to spray the poison to keep them away. This goes on and on. At what point do you stop paying him to spray poison on your roses?
Keep in mind that if there really are bugs waiting to take over once the poison is gone, you are going to lose your roses if you stop spraying. So you are taking a big risk if you stop. On the other hand, only he really knows for sure if the bugs are threatening, you are just taking his word for it.
Now add to that the possibility that the poison is not guaranteed. You may have an infestation even in a week where he sprays. Of course this only happens if the bugs are a threat. If you spray for many weeks and you see no infestation this is a pretty good sign that the bugs are not a threat at all.
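To see how that inference would work, here is a minimal Bayesian sketch. It assumes the gardener really is spraying, and the prior and the weekly chance of a breakthrough infestation are invented numbers.

```python
# Sketch: posterior probability that the bugs are a real threat after
# successive weeks of spraying with no infestation. The prior and the
# weekly breakthrough probability are illustrative assumptions.
prior_threat = 0.5          # prior probability the bugs are a real threat
p_infest_if_threat = 0.2    # weekly chance of infestation despite spraying, if threat is real

posterior = prior_threat
for week in range(1, 11):
    # Bayes update after another clean week: no infestation is certain if
    # there is no threat, and has probability 0.8 if there is one.
    no_infest_if_threat = 1 - p_infest_if_threat
    posterior = (posterior * no_infest_if_threat) / (
        posterior * no_infest_if_threat + (1 - posterior))
    print(f"week {week:2d}: P(threat | no infestation so far) = {posterior:.3f}")
```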
If you do stop spraying at some point, on what basis do you make that decision? Assuming he is spraying vigilantly you would optimally stop after many weeks of no infestation. You would continue for sure if one week the bugs return even though he was spraying.
But you don’t know for sure that he is actually spraying. You are paying him to do it, but you are taking his word for it that he is actually spraying. If you assume that he is doing his job and spraying vigilantly, and you therefore follow the decision rule above, and if he wants to keep his job, then he won’t be spraying vigilantly after all.
So what do you do?

Consider an infinite-horizon decision problem consisting of a sequence of beats. Each beat is divided into two eighth notes and you have to decide when to play them. If you have standard exponential discounting you will space your eighth notes evenly through time. You will play a classical rhythm. But if you are what behavioral economists call a hyperbolic discounter and you have present bias, you will procrastinate playing the first eighth note. But then in order to complete the beat you will need to play the second eighth note in quick succession. This pattern will repeat through time. You will play a swing rhythm.
- I can’t wait for the Rick Perry Presidency.
- The most surreal thing about this 1958 Mike Wallace interview with Salvador Dali is the advertisement for Parliament cigarettes at the beginning.
- The last Turing test: appropriately appending “That’s what she said.”
- Various lies.
- The McGurk effect.
- Economics abstracts in haiku form.
- Sit on a spade fuyuh.
- Fall on a flaming can of Raid fuyuh.
- Reach into the garbage disposal to save a hastily discarded tapenade fuyuh.

