You are currently browsing jeff’s articles.
In a paper published in the Journal of Quantitative Analysis in Sports, Larsen, Price, and Wolfers demonstrate a profitable betting strategy based on the slight statistical advantage of teams whose racial composition matches that of the referees.
We find that in games where the majority of the officials are white, betting on the team expected to have more minutes played by white players always leads to more than a 50% chance of beating the spread. The probability of beating the spread increases as the racial gap between the two teams widens such that, in games with three white referees, a team whose fraction of minutes played by white players is more than 30 percentage points greater than their opponent will beat the spread 57% of the time.
The methodology of the paper leaves some lingering doubt, however, because the analysis is retrospective and only some of the tested strategies wind up being profitable. A more convincing way to do a study like this is to first make a public announcement that you are doing a test and, using the method discussed in the comments here, secretly document what the test is. Then implement the betting strategy and announce the results, revealing the secret announcement.
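One way to "secretly document" the test in advance is a cryptographic commitment: publish a hash of the strategy description today, then after the results come in reveal the text and the random nonce so anyone can verify nothing was changed. A minimal sketch in Python; the strategy string is purely illustrative:

```python
# Hash commitment for pre-registering a test: publish the digest now,
# reveal the text and nonce later.  The strategy text is made up here.
import hashlib
import secrets

def commit(strategy_text: str) -> tuple[str, str]:
    """Return (nonce, commitment). Publish the commitment; keep the
    nonce and the strategy text secret until the reveal."""
    nonce = secrets.token_hex(16)  # prevents brute-forcing short messages
    digest = hashlib.sha256((nonce + strategy_text).encode()).hexdigest()
    return nonce, digest

def verify(strategy_text: str, nonce: str, commitment: str) -> bool:
    """Anyone can check that the revealed text matches the old commitment."""
    return hashlib.sha256((nonce + strategy_text).encode()).hexdigest() == commitment

nonce, c = commit("Bet the whiter team whenever all three referees are white.")
# ... publish c, run the betting strategy, then reveal the text and nonce ...
assert verify("Bet the whiter team whenever all three referees are white.", nonce, c)
```

Because SHA-256 is effectively impossible to invert or collide, the published digest proves the test was fixed in advance without disclosing it.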
I arrive there this evening, hungry.
Navy Captain Owen Honors was relieved of his command of the USS Enterprise. This is the guy behind the viral videos that made the news this week.
I want to blog about the news coverage of the firing. For example, this Yahoo! News article has the headline “Navy Firing Over Videos Raises Questions Of Timing.” Here is the opening paragraph:
The Navy brusquely fired the captain of the USS Enterprise on Tuesday, more than three years after he made lewd videos to boost morale for his crew, timing that put the military under pressure to explain why it acted only after the videos became public.
Two observations:
- Sadly, it does make perfect sense to respond to his firing now by complaining that he wasn’t fired earlier. (And it would make sense to complain less had he never been fired at all.) The firing now reveals that his behavior crosses some line that the Navy has private information about. Now that we know he crossed that line we have good reason to ask why he wasn’t punished earlier.
- Obviously that fact implies that it is especially difficult for the Navy to fire him now, even if they think he deserves to be fired.
The more general lesson is that social forces which are perfectly rational and robust leave tragically little reward for changing your mind. The argument that a mind-changer is someone who recognizes his own mistakes and is mature enough to reverse course cannot win out over the “waffler” label or some other pejorative. And the force is especially strong when it comes to picking a leader.

It’s loony to celebrate New Year’s, new millennia, etc. Every day counts equally in the march of time. Just by arbitrary historical accident one of those days is called the first day of the year.
But it occurred to me when I wrote the post about the rise in stock prices last month that there is social value from coordinating our focus on arbitrary milestone days. If someone presents statistics to you about the behavior of some variable over the course of a year, which would be more meaningful?
- Stocks rose x% from July 9 2010 to July 9 2011.
- Stocks rose x% from Jan 1 2010 to Jan 1 2011.
Subjectively, it is more likely that the dates for the first range were cherry-picked by the statistician to generate the conclusion. Restricting attention to dates that have significance “outside the model” makes the exhibit more credible.
Of this (via MR):
It’s an auction conducted at the airport terminal. In this auction you are a seller and you are bidding to sell your ticket back to the airline.
Optimists look at this and contemplate the efficiency gains: this is a mechanism for appropriately allocating scarce space on the plane. Pessimists detect a nasty incentive: now that the lowest bidder can be bought off the plane the airline has a stronger incentive to overbook.
The pessimists are right precisely because the optimists are right too.
Consider standard airline pricing with no overbooking. You buy a ticket in advance for a flight next month. Lots of uncertain details are resolved between now and then which determine your actual willingness to pay to fly on the departure date. One month in advance you can only form an expectation of this and that expected value is your willingness to pay for a seat in advance.
This is inefficient. Because, after the realization of uncertainty it could be that your value for flying is lower than somebody else who didn’t buy a ticket. Efficiency dictates that you should sell your ticket to him on the day of the flight.
One way to implement this is to hold an auction on the day of departure. Put aside the issue that flyers want advance booking for planning reasons. Even without that incentive, just-in-time auctions solve the inefficiency problem with conventional pricing but airlines would never use them.
The reason is that an auction leaves bidders with consumer surplus (or in the parlance of information economics, information rents). As a simple example, suppose there is a single seat available on the flight and two bidders are bidding for it. An optimal auction is (revenue-equivalent to) a second-price auction so that the winning bidder’s price is equal to the willingness to pay of the second-highest bidder. That is lower than the winner’s willingness to pay and the difference is his consumer’s surplus.
The airline would like to achieve the efficient allocation without leaving you this consumer’s surplus. That is impossible in a spot-auction because the airline can never know exactly how much you are willing to pay and charge you that.
But a hybrid pricing mechanism can implement the efficient allocation and capture all the surplus it generates. And this hybrid pricing mechanism entails overbooking followed by a departure-day auction to sell back excess tickets.
The basic idea is standard information economics. The reason you get your information rents in the spot auction is that you have an informational advantage: only you know your realized willingness to pay. To remove that informational advantage the airline can charge you an entrance fee to participate in the auction before your willingness to pay is realized, i.e. a month in advance as in conventional pricing.
Here is how the scheme works in the simple example. There is one seat available. Instead of selling that single seat to a single passenger, the airline sells two tickets. Then, on the day of departure an auction is held to sell back one ticket to the airline. The person who “wins” this auction and makes the sale will be the person with the lowest realized value for flying. The other person keeps their ticket and flies. On auction day, the winner gets some surplus: the price he will receive is the willingness to pay of the other guy which is by definition higher than his own. (Delta is apparently using a first-price auction, but by revenue equivalence the surplus is the same.)
But in order to get the opportunity to compete in this auction you have to buy a ticket a month in advance. And at that time you don’t know whether you are going to win the auction or fly. The best you can do is calculate your expected surplus from participating in that auction and you are willing to pay the airline that much to buy a ticket. Your ticket is really your entrance pass to the auction. And the price of that ticket will be set to extract all of your expected surplus.
Note that the only way that the airline can achieve these efficiency gains and the accompanying increase in profits is by overbooking at the stage of ticketing. So the pessimists are right.
(You can write down a literal model of all of the above. The conclusion that all of your surplus is extracted would follow if travelers were ex ante symmetric: they all have the same expected willingness to pay at the time of ticketing. But the general conclusion doesn’t require this: all of the efficiency gains from adding a departure-day sellback auction will be expropriated by the airline. That follows from a beautiful paper by Eso and Szentes. To the extent that fliers retain some consumer surplus it is due to ex ante differences in expected willingness to pay. The two fliers with the highest expected surplus will buy tickets at a price equal to the third-highest expected surplus. This consumer surplus is already present in conventional pricing.)
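A quick Monte Carlo check of the simple example makes the accounting concrete. Assume two ex ante symmetric travelers whose departure-day values are drawn independently from U[0,1]; that distribution is my choice for illustration. Each traveler’s gross payoff from the mechanism is max(v1, v2) whether he flies or sells back, so the ticket price that extracts all expected surplus is E[max(v1, v2)] = 2/3:

```python
# Overbook-then-auction sketch: one seat, two tickets sold in advance.
# Departure-day values drawn from U[0,1] is an illustrative assumption.
import random

random.seed(0)
N = 200_000
ticket_price = 2 / 3  # E[max(v1, v2)] for U[0,1]: each traveler's expected
                      # gross payoff from the day-of sellback auction

airline, travelers = 0.0, 0.0
for _ in range(N):
    v1, v2 = random.random(), random.random()
    high = max(v1, v2)
    # Second-price sellback: the low-value traveler sells his ticket back
    # at a price equal to the other traveler's value; the high-value
    # traveler keeps his ticket and flies.  Either way a traveler's
    # gross payoff is `high`.
    airline += 2 * ticket_price - high      # two fares in, one buyback out
    travelers += 2 * (high - ticket_price)  # both travelers' net surplus

print(round(airline / N, 3))    # ≈ 0.667: the airline captures E[max],
                                # the whole surplus the efficient allocation creates
print(round(travelers / N, 3))  # ≈ 0.0: all expected consumer surplus extracted
```

Compare conventional pricing in the same setup: the airline sells the one seat in advance at the expected value E[v] = 1/2, earning less and sometimes seating the wrong traveler.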
You did take my advice didn’t you? If you did, then because of the January effect, you bought the S&P 500 at 1180.55 on November 30 and sold it on the first trading day of the new year, yesterday, at a price of 1271.87 and made a 7.7% return in a single month.
“Bob, the professor business is even sleazier than the jewelry business. At least in the jewelry business we were honest about being fake. Plus, when I go to conferences, I’ve never seen such pretentiousness. These are the most precious people I’ve ever met.”
“Come on, Clancy. Did you really think people were going to be any better in a university?”
“Um, kind of.” Of course I did. “And it’s not that they’re not better. They’re worse.”
“Well, you may have a point there.” (Bob was always very tough on the profession of being a professor.) “Focus on the students and your writing. The rest of it is b.s.” (That was a favorite expression of Bob’s, as it is of a former colleague of his at Princeton, Harry Frankfurt.)
“With the students, I still feel like I’m selling.” (I was very worried about this.)
“You are selling. That’s part of what it is to be a good teacher.” (Bob was in the university’s Academy of Distinguished Teachers and had won every teaching award in the book. He also made several series of tapes for the Teaching Company.) “To be a good teacher, you have to be part stand-up comic, part door-to-door salesman, part expert, part counselor. Do what feels natural. Be yourself. Are your students liking it? Is it working for you?”
The story of a guy who dropped out, trained himself as a liar in the jewelry business, and then went back to academia. (Montera missive: kottke.org)
I don’t mean breaking and entering. It’s New Year’s Eve — 2PM on New Year’s Eve — and after heading out for a quick lunch I return to find the Jacobs Center locked for the weekend. There is a separate electronic key to the building and I have one somewhere but I never need it so I don’t carry it around with me. So I have to stand in the cold and wait for somebody to enter or exit the building and let me in.
There are two entrances so the question is which one to stand by and wait at. I wait for a while at the main entrance and then decide to try my luck at the next one on the other side of the building, about a 2 minute walk. Of course on the way I am imagining that someone must be leaving from the first entrance just as it passes out of sight. When I get to the other entrance I find that there’s just as little activity there as at the first one. After a while I give up again and go back to the first.
I have a sinking feeling as I am walking back that I am violating some basic rationality postulate by dropping the first alternative only to switch back to it again. But it’s not hard to rationalize switching, even indefinite switching, with a simple model of uncertain arrival rates.
At each entrance there is a random arrival process, say Poisson, which produces a comer or goer with some given flow rate. It’s random so even if the arrivals are frequent on average it’s still possible that there is a long wait just because of bad luck. Because it’s an unusual day I don’t know for sure what the arrival rates are at the two entrances so the best I can do is form a subjective distribution.
As time passes I learn only about the door I am watching and what I am learning is that the arrival rate is slower than I thought. With every moment that passes while I am still out in the cold, the current door’s expected arrival rate continuously drops. There comes a point in time when it drops low enough that I want to switch to the other door. The expected arrival rate at the other door hasn’t changed because I haven’t learned anything about it. I give up and walk to the other door once the estimated rate at the current door drops far enough below the other’s that the switch is worth 2 minutes of walking (and no chance of getting in during that time). In fact, this may happen before the current door’s expected arrival rate drops below that of the other door. (Due to option value. See below.)
Once at the other door I start to learn about it and I stop learning about the first door. Again, as time passes its estimated arrival rate drops while that of the first door remains constant. There is again another threshold after which I return to the first. Etc. Until I finally give up and throw a brick through the Kellogg student lounge window.
Observation: Consider the threshold at which I switch from door 1 to door 2. That is based on a comparison of the value of staying put versus the value of switching. The value of switching has built into it the option value of being able to switch back. You can see the role of this option value by considering a truncated problem where once I switch doors I am unable to switch back. Relative to that problem, the option of switching back makes me switch more frequently. Because without the option to switch back, I want to hold on to the current option until I am certain that it’s a loser before giving it up for good.
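Here is a minimal numerical sketch of the switching dynamic. Assume a Gamma(10, 5) prior over each door’s Poisson arrival rate, so after t minutes of watching a door with no arrival its posterior mean falls to 10/(5 + t) while the unwatched door’s estimate stays put. The switching margin is a crude stand-in for the walking cost and option value, not an optimal-stopping threshold:

```python
# Two-door waiting model: Bayesian learning about Poisson arrival rates.
# The Gamma(10, 5) prior (posterior mean = shape/rate) and the 0.5
# switching margin are illustrative choices, not derived thresholds.

def posterior_mean(a: float, b: float) -> float:
    return a / b

doors = {1: [10.0, 5.0], 2: [10.0, 5.0]}  # Gamma(shape, rate) per door
current = 1
switch_margin = 0.5   # stand-in for the 2-minute walk (plus option value)
history = []

for minute in range(30):
    # Another minute at the current door with no arrival: only that door's
    # posterior updates (rate parameter grows, expected rate drifts down).
    doors[current][1] += 1.0
    here = posterior_mean(*doors[current])
    there = posterior_mean(*doors[3 - current])
    if there - here > switch_margin:
        current = 3 - current
        history.append(minute)

print(history)  # → [1, 7, 25]: switch, switch back, switch again
```

Notice the gaps lengthen: each abandoned door leaves behind a depressed estimate, so the newly watched door has to disappoint by more before switching back is worthwhile.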
Made it to Brooklyn alive. I don’t see what the big deal is, some nice chap shoveled me a spot and even gave me a free chair!
From @TheWordAt.
Speaking of which, have you noticed the similarity between shovel-earned parking dibs and intellectual property law? In both cases the incentive to create value is in-kind: you get monopoly power over your creation. The theory is that you should be rewarded in proportion to the value of the thing you create. It’s impossible to objectively measure that and compensate you with cash so an elegant second-best solution is to just give it to you.
At least in theory. But in both IP and parking dibs there is no way to net out the private benefit you would have earned anyway even in the absence of protection. (Aren’t most people shoveling spaces because otherwise they wouldn’t have any place to put their car in the first instance? Isn’t that already enough incentive?) And all of the social benefits are squandered anyway due to fighting ex post over property rights.

I wonder how many people who save parking spaces with chairs are also software/music pirates?
Finally, here is a free, open-source Industrial Organization textbook (dcd: marciano.) This guy did a lot of digging and we all get to recline in his chair.
Amazon has patented a way to let you return gifts before you even receive them.
Amazon’s innovation, not ready for this Christmas season, includes an option to “Convert all gifts from Aunt Mildred,” the patent says. “For example, the user may specify such a rule because the user believes that this potential sender has different tastes than the user.” In other words, the consumer could keep an online list of lousy gift-givers whose choices would be vetted before anything ships.
The benefit to the receiver is clear. The benefit to Amazon is even bigger:
The proposal has also brought into focus a very costly part of the e-retailing business model: Up to 30 percent of purchases are returned, and the cost of getting rejected gifts back across the country and onto shelves has online retailers scrambling for ways to reduce these expenses.
To the giver? Think of it as weakly dominating a gift card. It’s a gift card with a default. If gifts are better than gift cards because they allow you to show the recipient something they never would have found or considered on their own, then this system achieves that without the risk of it going badly. Perhaps that allows you to take even more risks with your gifts. Not everyone is happy though.
“This idea totally misses the spirit of gift giving,” Post said. “The point of gift giving is to allow someone else to go through that action of buying something for us. Otherwise, giving a gift just becomes another one of the world’s transactions.”
Amazon’s system gives users a “Gift Conversion Wizard” through which they can program various rules like “no gifts made of wool” or “Convert any gift from Aunt Mildred to a gift certificate, but only after checking with me.” But what will the giver be told?
Most cleverly – or deviously, depending on your attitude toward this sort of manipulation – the gift giver will be none the wiser: “The user may also be provided with the option of sending a thank you note for the original gift,” according to the patent, “even though the original gift is converted.” (Alternatively, a recipient could choose to let the giver know he has exchanged the item for something else.)
Casquette cast: Courtney Conklin Knapp.
Jonathan Weinstein does a very good Dickens. A fun read.
“There are many other purposes of charity, Uncle, but at the risk of my immortal soul, I shall debate you on your own coldhearted terms. Your logic concerning gifts appears infallible, but you have made what my dear old professor of economic philosophy would call an implicit assumption, and a most unwarranted one.”
You can find it here, thanks to reader Elisa for hunting it down. The core is paragraphs 43-112 (starting on page 27), which lay out the new rules. I will give some excerpts and my own commentary.
The regulations break down into 4 categories: transparency, no blocking, no unreasonable discrimination, and reasonable network management. Transparency is what it sounds like: providers are required to maintain and make available data on how they are managing their networks. The blocking and discrimination rules are the most important and the ones I will focus on.
No Blocking.
A person engaged in the provision of fixed broadband Internet access service, insofar as such person is so engaged, shall not block lawful content, applications, services, or non- harmful devices, subject to reasonable network management. (paragraph 63)
This is the clearest statement in the entire document. (Many phrases are qualified by the “reasonable network management” toss-off. In the abstract that could be a troubling grey area, but it is pretty well clarified in later sections and appears to be mostly benign, although see one exception I discuss below.) The no-blocking rule is elaborated in various ways: providers cannot restrict users from connecting compatible devices to the network, degrading particular content or devices is equivalent to blocking and not permitted, and especially noteworthy:
Some concerns have been expressed that broadband providers may seek to charge edge providers simply for delivering traffic to or carrying traffic from the broadband provider’s end-user customers. To the extent that a content, application, or service provider could avoid being blocked only by paying a fee, charging such a fee would not be permissible under these rules. (paragraph 67)
No Unreasonable Discrimination
A person engaged in the provision of fixed broadband Internet access service, insofar as such person is so engaged, shall not unreasonably discriminate in transmitting lawful network traffic over a consumer’s broadband Internet access service. Reasonable network management shall not constitute unreasonable discrimination.
This rule is heavily qualified in the paragraphs that follow. Here is my framework for reading these. There are three typical ways a provider would discriminate: differentially pricing various services (i.e. you pay differently whether you are accessing Facebook or YouTube), differentially pricing by quantity (i.e. the first MB costs more or less than the last), or differentially pricing by bandwidth (i.e. holding quantity fixed, you pay more if you want it sent to you faster, for example when watching HD video).
The rules seem to consider some of these forms of discrimination unreasonable but others reasonable. The clearest prohibition is against the first form of discrimination, by data type.
For a number of reasons, including those discussed above in Part II.B, a commercial arrangement between a broadband provider and a third party to directly or indirectly favor some traffic over other traffic in the broadband Internet access service connection to a subscriber of the broadband provider (i.e., “pay for priority”) would raise significant cause for concern. (paragraph 76)
Such a ban is clearly dictated by economic efficiency. The cost of transmitting a datagram is independent of the content it contains and therefore efficient pricing should treat all content equally on a per-datagram basis. This principle is the hardest to dispute and the FCC has correspondingly taken the clearest stand on it.
As for quantity-based discrimination:
We are, of course, always concerned about anti-consumer or anticompetitive practices, and we remain so here. However, prohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks. The framework we adopt today does not prevent broadband providers from asking subscribers who use the network less to pay less, and subscribers who use the network more to pay more. (paragraph 72)
So tiered service by quantity is permitted. Note that the wording given above is off the mark in terms of what efficiency dictates. It is not quantity per se that should be priced but rather congestion. A toll road is a useful metaphor. From the point of view of efficiency, the purpose of a toll is to convey to drivers the social cost of their use of the road. When drivers must pay this social cost, they are induced to make the efficient decision whether to use the road by comparing it to their private benefit.
The social cost is zero when traffic is flowing freely (no congestion) because an additional driver doesn’t slow anybody else down. So tolls should be zero during these periods. Tolls are positive only when the road is utilized at capacity and additional drivers reduce the value of the road to others.
So “lighter users subsidizing heavier users” sounds unfair but it’s really orthogonal to the principles of efficient network management. In an efficiently priced network the off-peak users are subsidized by the peak users regardless of their total amount of usage. And this is how it should be, not because of anything having to do with fairness but because of incentives for efficient usage.
There is one big problem with this toll-road metaphor when it comes to the Internet however. The whole point of peak pricing is to signal to drivers that it’s costly to drive right now. But when you are downloading content from the Internet things are happening too fast for you to respond to up-to-the-second changes in congestion. It is just not practical to have prices adjust in real time to changing network conditions as peak-load pricing dictates. And since users cannot respond to congestion prices in real time, calculating prices ex post and sending users the bill at the end of the month would not serve their purpose.
Given this, it could be argued that a reasonable proxy is to charge users by their total usage. It’s a reasonable approximation that those with greater total usage are also most likely to be imposing greater congestion on others. And the FCC rules permit this. (Note that in particular, what is implied by tiered pricing as a proxy for congestion pricing is not a quantity discount but in fact a quantity surcharge. The per-datagram price is larger for heavier users.)
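A quantity surcharge just means the tariff is convex: the marginal price per unit rises with total usage, so heavier users pay a higher average price, not a lower one. A sketch with invented tier boundaries and prices (nothing here comes from the FCC order):

```python
# Tiered tariff with a quantity *surcharge*: the per-GB price rises with
# total usage, the opposite of a bulk discount.  Tier caps and prices
# are made-up numbers for illustration.
TIERS = [(50, 0.10), (150, 0.20), (float("inf"), 0.40)]  # (GB cap, $ per GB)

def bill(usage_gb: float) -> float:
    """Total charge: each GB is priced at the rate of the tier it falls in."""
    total, prev_cap = 0.0, 0.0
    for cap, price in TIERS:
        in_tier = max(0.0, min(usage_gb, cap) - prev_cap)
        total += in_tier * price
        prev_cap = cap
    return total

# The average price per GB increases with usage, as a congestion proxy requires:
for usage in (40, 120, 300):
    print(usage, "GB:", round(bill(usage) / usage, 3), "$/GB")
```

A light user here pays $0.10 per GB on average while a heavy user pays nearly three times that, which is the rough sense in which total usage proxies for the congestion a user imposes.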
Discrimination by bandwidth is not directly addressed. It is therefore implicitly allowed because paragraph 73 reads “Differential treatment of traffic that does not discriminate among specific uses of the network or classes of uses is likely reasonable. For example, during periods of congestion a broadband provider could provide more bandwidth to subscribers that have used the network less over some preceding period of time than to heavier users.”
But the following paragraph comes from the section on Network Management.
Network Congestion. Numerous commenters support permitting the use of reasonable network management practices to address the effects of congestion, and we agree that congestion management may be a legitimate network management purpose. For example, broadband providers may need to take reasonable steps to ensure that heavy users do not crowd out others. What constitutes congestion and what measures are reasonable to address it may vary depending on the technology platform for a particular broadband Internet access service. For example, if cable modem subscribers in a particular neighborhood are experiencing congestion, it may be reasonable for a broadband provider to temporarily limit the bandwidth available to individual end users in that neighborhood who are using a substantially disproportionate amount of bandwidth. (paragraph 91)
At face value it gives well-intentioned providers the ability to manage congestion. But there doesn’t seem to be a clear statement about how this ability can be integrated with pricing. Can providers sell “managed” service at a discount relative to “premium” service? One reassuring passage emphasizes that network management practices must be consistent with the no-discrimination-by-data-type mandate. So, for example, congestion caused by high-bandwidth video must be managed equally whether it comes from YouTube or from Comcast’s own video services.
Finally, the rules permit what’s called “end-user controlled” discrimination, i.e. 2nd degree price-discrimination. This means that broadband providers are permitted to offer an array of pricing plans from which users select.
Maximizing end-user control is a policy goal Congress recognized in Section 230(b) of the Communications Act, and end-user choice and control are touchstones in evaluating the reasonableness of discrimination. As one commenter observes, “letting users choose how they want to use the network enables them to use the Internet in a way that creates more value for them (and for society) than if network providers made this choice,” and “is an important part of the mechanism that produces innovation under uncertainty.” Thus, enabling end users to choose among different broadband offerings based on such factors as assured data rates and reliability, or to select quality-of-service enhancements on their own connections for traffic of their choosing, would be unlikely to violate the no unreasonable discrimination rule, provided the broadband provider’s offerings were fully disclosed and were not harmful to competition or end users.
While this paints a too-rosy picture of the consumer-welfare effects of 2nd degree price-discrimination (it typically makes some consumers worse off and can easily make all consumers worse off) it seems hard to imagine how you can allow the kind of tiered pricing already discussed and not allow consumers to choose among plans.
So the FCC is allowing broadband providers to roll out metered service, possibly with quantity premiums, and there is a grey area when it comes to bandwidth restrictions. These are consistent with, but not implied by, efficient pricing, and of course we are putting them in the hands of monopolists, not social planners. They certainly fall short of what net-neutrality hawks were asking for, but it was wishful thinking to imagine that these changes were not coming.
I think that the no-blocking and no unreasonable discrimination rules are the core of net-neutrality as an economic principle and getting these is more than sufficient compensation for tiered pricing.
Final disclaimer: everything above applies to “fixed broadband providers” like cable or satellite. The FCC’s approach to mobile broadband can be summarized as “wait-and-see.”
Bad review < Good review < No review at all:
S. Irene Virbila, the L.A. Times’ restaurant critic for the last 16 years, was visiting Red Medicine restaurant in Beverly Hills on Tuesday night when she was approached by managing partner Noah Ellis, who took Virbila’s picture without her permission and then ordered Virbila and her three companions to leave, refusing them service.
Ellis posted her picture on the restaurant’s Tumblr site, explaining that she was not welcome there.
The LA Times food blog has the story. Other blogs have the picture. The Times is undeterred.
The Times will continue with its plans to review Red Medicine. The restaurant was chosen for review, Parsons said, because of its pedigree – Ellis has worked in the past with noted chef and restaurateur Michael Mina. And, Parsons added, “We had hopes that they would be doing interesting things with Southeast Asian food. We will still review them.”
Subjects were given a sugar pill. They were told it was a sugar pill. They were told that sugar pills are not medicine. And yet they had better outcomes than the control group who were not treated at all.
Edzard Ernst, a professor of complementary medicine at the University of Exeter, says, “This is an elegant study which suggests that the ritual of giving a patient a remedy is clinically effective, even if that patient has been told that the remedy is a placebo.” Kaptchuk himself says, “I suspect that just performing “the ritual of medicine” could have activated or primed self-healing mechanisms.” And Amir Raz, a neuroscientist who studies placebos at McGill University, adds, “Scientific reports make it clear, even if strange and counterintuitive, that receiving – rather than the actual content of – medical treatment can trigger and propel a healing process.”
Notably, the patients (apparently even the control group) were told about the psychology of the placebo effect.
They told the patients that “placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.” And they explained that “the placebo effect is powerful; the body can automatically respond to taking placebo pills like Pavlov’s dogs who salivated when they heard a bell; a positive attitude helps but is not necessary; and taking the pills faithfully is critical.”
There are many caveats and open questions; the full article is worth a read.

It sounds so simple: you’re nice, you make the list; you’re naughty, you get a stocking full of coal. But just how much of the year do you have to be nice?
It would indeed be simple if Santa could observe perfectly your naughty/nice intentions. Then he could use the grim ledger: you make the list if and only if you are nice all 365 days of the year. But it’s an imperfect world. Even the best intentions go awry. Try as you may to be nice there’s always the chance that you come off looking naughty due to misunderstandings or circumstances beyond your control. Just ask Rod Blagojevich.
And with 365 chances for misunderstanding, the grim ledger makes for a mighty slim list come Christmas Eve. No, in a world of imperfect monitoring, Santa needs a more forgiving test than that. But while it should be forgiving enough to grant entry to the nice, it can’t be so forgiving that it also allows the naughty to pass. And then there’s that dreaded third category of youngster: the game theorist who will try to find just the right mix of naughty and nice to wreak havoc but still make the list. Fortunately for St. Nick, the theory of dynamic moral hazard has it all worked out.
There exists a number T between 0 and 365 (the latter being a “sufficiently large number of periods”) with three key properties:
- The probability that a truly nice boy or girl comes out looking nice on at least T days is close to 100%,
- The probability that the unwaveringly naughty gets lucky and comes out looking nice for T days is close to 0%,
- If you are being strategic and you are going to be naughty at least once, then you should go all the way and be unwaveringly naughty.
The formal statement of #3 (which is clearly the crucial property) is the following. Suppose you consider being naughty for Z days and nice for the remaining 365-Z days. If you do, your payoff has two parts. First, you get to be naughty for Z days. Second, you have a certain probability of making the list. Property #3 says that the total expected payoff is convex in Z. And with a convex payoff you want to go to extremes: either nice all year long or naughty all year long.
And given #1 and #2, you are better off being nice than naughty. One very important caveat though. It is essential that Santa never let you know how you are doing as the year progresses. Because once you know you’ve achieved your T you are in the clear and you can safely be naughty for the remainder. No wonder he’s so secretive with that list.
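The threshold logic is easy to see in a short simulation. Everything below is illustrative and not part of any formal model in the post: the detection probabilities, the cutoff T, and the payoff numbers are invented for the sketch.

```python
DAYS = 365
Q_NICE = 0.9      # chance a genuinely nice day *looks* nice (invented)
Q_NAUGHTY = 0.3   # chance a naughty day happens to look nice (invented)
T = 300           # Santa's cutoff: look nice on at least T days (invented)
REWARD = 100.0    # value of making the list
TEMPTATION = 0.2  # per-day fun from being naughty

def pass_prob(z, t=T):
    """P(at least t nice-looking days) when naughty on z days, nice on the rest."""
    dist = [1.0]  # dist[k] = P(exactly k nice-looking days so far)
    for day in range(DAYS):
        q = Q_NAUGHTY if day < z else Q_NICE
        new = [0.0] * (len(dist) + 1)
        for k, p in enumerate(dist):
            new[k + 1] += p * q      # today looks nice
            new[k] += p * (1 - q)    # today looks naughty
        dist = new
    return sum(dist[t:])

# Properties 1 and 2: the cutoff separates the nice from the naughty.
print(round(pass_prob(0), 4))     # nice all year: passes almost surely
print(round(pass_prob(DAYS), 4))  # naughty all year: essentially never passes

# Property 3: total payoff is convex in z, so mixing is never optimal.
def payoff(z):
    return z * TEMPTATION + pass_prob(z) * REWARD

for z in (0, 100, 200, DAYS):
    print(z, round(payoff(z), 2))
```

With these numbers the payoff dips for intermediate z and rises back at the extremes, which is exactly why the strategic youngster goes all-in one way or the other.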
(The classic reference is Radner. More recently these ideas are being used in repeated games.)
Moving us one step closer to a centralized interview process (a good thing as I have argued), the Duke department of economics is posting video clips of job talks given by their new PhD candidates. Here is the Duke Economics YouTube Channel, and here is the talk of Eliot Annenberg (former NU undergrad and student of mine btw.) I expect more and more departments to be doing this in the future. (Bearskin bend: Econjeff)
While we are on the subject here is a recent paper that studies the Economics academic labor market (beyond the rookie market.) The abstract:
In this paper we study empirically the labor market of economists. We look at the mobility and promotion patterns of a sample of 1,000 top economists over thirty years and link it to their productivity and other personal characteristics. We find that the probability of promotion and of upward mobility is positively related to past production. However, the sensitivity of promotion and mobility to production diminishes with experience, indicating the presence of a learning process. We also find evidence that economists respond to incentives. They tend to exert more effort at the beginning of their career when dynamic incentives are important. This finding is robust to the introduction of tenure, which has an additional negative ex post impact on production. Our results indicate therefore that both promotions and tenure have an effect on the provision of incentives. Finally, we detect evidence of a sorting process, as the more productive individuals are allocated to the best ranked universities. We provide a very simple theoretical explanation of these results based on Holmström (1982) with heterogeneous firms.
via eric barker.
Today the commissioners of the FCC will meet to vote on a new proposed policy concerning Net Neutrality. It is expected to pass. Pundits, policymakers and media of all predispositions are hyperventilating over the proposal but none link to it and I can’t find the actual document anywhere. Does anybody have a link to it?
Commenting on Jonah Lehrer’s article on “The Truth Wears Off,” and how once rock-solid science eventually becomes impossible to replicate, Chris Blattman blames publication bias in all of its various forms.
The culprit? Not biology. Not adaptation to drugs. Not even prescription to less afflicted patients. Rather, it’s scientists themselves.
Journals reward statistical significance, and too many academics massage or select results until the magical two asterisks are reached.
But more worrisome is that much of the problem might be more unconscious: a profession-wide tendency to pay attention to, pursue, write up, publish, and cite unusually large and statistically significant findings.
This is all true, and it’s why you should reject out of hand studies like the one documenting “precognition” that made the rounds a few months ago. (Who’s gonna even mention, let alone publish, a study reporting that “we tried but just couldn’t find evidence that people can see the future”?)
But do be careful: if there is a publication bias in favor of the unexpected, then you have just as much reason to doubt that the “truth wears off.” If a fact was first proven then disproven, was publication bias to blame for the proof or the disproof?
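The selection story is easy to see in a toy simulation (all numbers invented): draw many noisy estimates of the same small true effect, “publish” only the statistically significant ones, and the published literature exaggerates the effect, which then appears to “wear off” when unfiltered replications come in.

```python
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.2   # small true effect (invented)
SE = 0.15           # standard error of each study's estimate (invented)

# Each "study" reports the true effect plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(10_000)]

# Journals publish only statistically significant findings: |estimate| > 1.96*SE.
published = [e for e in estimates if abs(e) > 1.96 * SE]

print(round(statistics.mean(published), 3))  # inflated: well above the true 0.2
print(round(statistics.mean(estimates), 3))  # unfiltered replications: close to 0.2
```

The same filter that inflates the original findings also means a later, unfiltered batch of studies will look like the effect is shrinking.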
- A winner is declared in the contest to smuggle the phrase “I smoke crack rocks” into a published article.
- Surprisingly accurate account coupled with stunningly confused extrapolation of the Ellsberg ambiguity experiment.
- Julian Assange’s OKCupid profile.
- Salvia market boom after circulation of Miley Cyrus bong video.
- Where are you in the income distribution?
- FBI informant infiltrates Irvine, CA mosque. Muslims are so alarmed at his violent provocations they get a restraining order against him.
There is a new blog, Spousonomics, written by two MILRs, Paula Szuchman and Jenny Anderson. The topics are near and dear to the Cheap Talk heart: “Using economics to master love, marriage, and dirty dishes.” You can read about The Definition of A Good Marriage, Wiki-Marriage-Leaks, and Signaling for Sex.
They are doing a series on “Economists in Love” and based on my post on Hormone Neutraility, they assumed I was a sensitive guy and so they asked me a few questions like “Why are you an economist?” and “Is marriage a repeated game?” Of course the reality is that I am emotionally stuck in the 7th grade and so you can see my smart-ass answers here.
Also, there is a picture of me and my wife (she looking hot, me looking like I need a nap.)
Writing naked puts a man at risk of ruin due to financial crash blossoms.
Stare at it a while before clicking through for the disentanglement. (You might want to read the MR post which inspired it.)
In sports, high-powered incentives separate the clutch performers from the chokers. At least that’s the usual narrative but can we really measure clutch performance? There’s always a missing counterfactual. We say that he chokes if he doesn’t come through when the stakes are raised. But how do we know that he wouldn’t have failed just as miserably under normal circumstances? As long as performance has a random element, pure luck (good or bad) can appear as if it were caused by circumstances.
You could try a controlled experiment, and probably psychologists have. But there is the usual leap of faith required to extrapolate from experimental subjects in artificial environments to professionals trained and selected for high-stakes performance.
Here is a simple quasi-experiment that could be done with readily available data. In basketball, once a team accumulates enough fouls, each additional foul sends the opponent to the free-throw line. This is called the “bonus.” In college basketball the bonus has two levels. On fouls 7-9 the penalty is what’s called a “one and one”: one free-throw is awarded, and then a second free-throw is awarded only if the first one is good. After 10 fouls the team enters the “double bonus,” where the shooter is awarded two shots no matter what happens on the first. (In the NBA there is no “single bonus”: after 5 fouls the penalty is two shots.)
The “front end” of the one-and-one is a higher stakes shot because the gain from making it is 1+p points where p is the probability of making the second. By contrast the gain from making the first of two free throws is just 1 point. On all other dimensions these are perfectly equivalent scenarios, and it is the most highly controlled scenario in basketball.
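The arithmetic behind the 1+p claim, as a quick sketch (the 70% shooter is just an illustrative number):

```python
def front_end_value(p):
    """Points gained by making the front end of a one-and-one vs. missing it:
    the make itself, plus a second attempt worth p in expectation."""
    return 1 + p

def first_of_two_value(p):
    """Points gained by making the first of two free throws vs. missing it:
    the second shot comes either way, so only the make itself is at stake."""
    return 1.0

p = 0.70  # free-throw percentage of a hypothetical shooter
print(front_end_value(p))      # 1.7 points at stake on the front end
print(first_of_two_value(p))   # 1.0 point at stake on the first of two

# Expected points per trip to the line, for comparison:
print(round(p * (1 + p), 3))   # one-and-one: 1.19
print(round(2 * p, 3))         # double bonus: 1.4
```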
The clutch performance hypothesis would imply that success rates on the front end of a one and one are larger than success rates on the first free-throw out of two. The choke-under-pressure hypothesis would imply the opposite. It would be very interesting to see the data.
And if there was a difference, the next thing to do would be to analyze video to look for differences in how players approach these shots. For example I would bet that there is a measurable difference in the time spent preparing for the shot. If so, then in the case of choking the player is “overthinking” and in the clutch case this would provide support for an effort-performance tradeoff.
Facebook, Buzz, Reader, and other social networking sites all have one thing in common: if you like something then you get to like it. But you never get to dislike what you dislike. (Sure you can unlike what you previously liked, but just as with that other interest rate you are constrained by the zero lower bound. You can’t go negative.)
This kind of system seems to pander to people such as me who obsessively count likes (and twitter followers, and google reader subscribers and…) because for people like us even a single dislike would be devastating. With only positive feedback possible we are spared the bad news.
But after a while we start to get the nagging suspicion that the lack of a like is tantamount to being disliked. We put ourselves in the mind of each individual reader. If she liked it then she will like it. If she didn’t like it, she would like to dislike it but she can’t. So she’s silent. But then if she was neutral she now knows that by being silent she is going to be pooled with the dislike haters. She doesn’t want to hurt my feelings so she likes. Kindhearted but cruel: now I know that everyone who didn’t like indeed didn’t like. It’s exactly as if there was a dislike button. Despair.
But wait. One wrinkle saves our fragile ego. Some people are just too busy to like. Or they don’t know about the like button. And who knows exactly how many people read the article anyway. So a non-like could be any one of these. Which means that kindhearted neutrals can safely stay on the sidelines and pool with these non-participants. A pool big enough to drown out the haters. Joyful noise! And as a bonus I get to know for sure that the likers are likers and not just patronizers.
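The sideline-pooling logic is just Bayes’ rule. A toy calculation (the 50/50 like split and the participation rates are invented): the bigger the pool of readers who never click anything, the less a non-like tells me.

```python
def p_disliked_given_silent(f, p_like=0.5):
    """P(reader disliked | no like), in a toy model where a fraction f of
    readers never click regardless, and engaged readers like iff they liked."""
    silent = (1 - f) * (1 - p_like) + f   # engaged dislikers + non-participants
    return (1 - f) * (1 - p_like) / silent

for f in (0.0, 0.5, 0.9):
    print(f, round(p_disliked_given_silent(f), 3))
# f=0.0: silence means dislike for sure (despair)
# f=0.9: a silent reader is almost certainly just a non-participant
```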
Finally there’s the personal aspect: it’s flattering to see who likes. The serial likers keep me going. Especially this one regular reader who by amazing coincidence has the same name as me and who likes everything I write.
(drawing: emotional baggage from www.f1me.net)
Islam forbids suicide. Of the world’s three Abrahamic faiths, “The Koran has the only scriptural prohibition against it,” said Robert Pape, a professor at the University of Chicago who specializes in the causes of suicide terrorism. The phrase suicide bomber itself is a Western conception, and a pretty foul one at that: an egregious misnomer in the eyes of Muslims, especially from the Middle East. For the Koran distinguishes between suicide and, as the book says, “the type of man who gives his life to earn the pleasure of Allah.” The latter is a courageous Fedayeen — a martyr. Suicide is a problem, but martyrdom is not.
From an article in the Boston Globe on the psychology of suicide bombers.
A white bank robber in Ohio recently used a “hyper-realistic” mask manufactured by a small Van Nuys company to disguise himself as a black man, prompting police there to mistakenly arrest an African American man for the crimes.
You hear this a lot in Chicago. “We are having a cold snap because there is a low-pressure system over the Midwest and a high-pressure system to the North. This causes windy conditions which brings cold air down from Canada.”
This sounds better than just saying “it’s cold today” but I can’t tell if it really is saying anything more than that. First of all, as I have said before, the following two statements are equivalent, at least empirically:
- The air is colder than usual
- The air was blown here from some place colder than here.
So telling me that the air came from Canada isn’t telling me much more than I already knew: it’s cold. But the extra bit here seems tautological at an even deeper level because these two statements:
- The air is blowing down from Canada
- There is high pressure in the North and low pressure here
appear to be literally the same thing. Why else would the air move from position A to position B if it were not due to pressure imbalances?
Is meteorology really just like finance? (“Stocks fell today because of bearish investors”) Or is there a non-circular way of explaining my frozen toes that just doesn’t fit into a 30 second weather report?
(drawing: glooming from http://www.f1me.net)

To use the justice system most effectively to stop leaks you have to make two decisions.
First, you have to decide what will be a basis for punishment. In the case of a leak you have essentially two signals you could use. You know that classified documents are circulating in public, and you know which parties are publishing the classified documents. The distinctive feature of the crime of leaking is that once the documents have been leaked you already know exactly who will be publishing them: The New York Times and Wikileaks. Regardless of who was the original leaker and how they pulled it off.
That is, the signal that these entities are publishing classified documents is no more informative about the details of the crime than the more basic fact that the documents have been leaked. It provides no additional incentive benefit to use a redundant signal as a basis for punishment.
Next you have to decide who to punish. Part of what matters here is how sensitive that signal is to a given actor’s efforts. Now the willingness of Wikileaks and The New York Times to republish sensitive documents certainly provides a motive to leakers and makes leaks more likely. But what also matters is the incentive bang for your punishment buck, and deterring all possible outlets from mirroring leaks would be extremely costly. (Notwithstanding Joe Lieberman.)
A far more effective strategy is to load incentives on the single agent whose efforts have the largest effect on whether or not a leak occurs: the guy who was supposed to keep the documents protected in the first place. Because when a leak occurs, in addition to telling you that some unknown and costly-to-track person spent too much effort trying to steal documents, it tells you that your agent in charge of keeping them secret didn’t spend enough effort doing the job you hired him to do.
You should spend 100% of your scarce punishment resources where they will do the most good: incentivizing him (or her.)
(Based on a conversation with Sandeep.)
Update: The Australian Government seems to agree. (cossack click: Sandeep)
For 4.6 billion years, the Sun has provided free energy, light, and warmth to Earth, and no one ever realized what a huge moneymaking opportunity is going to waste. Well, at long last, the Sun is finally under new ownership.
Angeles Duran, a woman from the Spanish region of Galicia, is the new proud owner of the Sun. She says she got the idea in September when she read about an American man registering his ownership of the Moon and most of the planets in the Solar System – in other words, all the celestial bodies that don’t actually do anything for us.
Duran, on the other hand, snapped up the solar system’s powerhouse, and all it cost her was a trip down to the local notary public to register her claim. She says that she has every right to do this within international law, which only forbids countries from claiming planets or stars, not individuals:
“There was no snag, I backed my claim legally, I am not stupid, I know the law. I did it but anyone else could have done it, it simply occurred to me first.”
She will soon begin charging for use. I advise her to hire a good consultant because pricing The Sun is not your run-of-the-mill profit maximization exercise. First of all, The Sun is a public good. No individual Earthling’s willingness to pay incorporates the total social value created by his purchase. So it’s going to be hard to capitalize on the true market value of your product even if you could get 100% market share.
Even worse, it’s a non-excludable public good. Which means you have to cope with a massive free-rider problem. As long as one of us pays for it, you turn it on, we all get to use it. So if you just set a price for The Sun, forget about market share; at most you’re gonna sell to just one of us.
You have to use a more sophisticated mechanism. Essentially you make the people of Earth play a game in which they all pledge individual contributions and you commit not to turn on The Sun unless the total pledge exceeds some minimum level. You are trying to make each individual feel as if his pledge has a chance of being pivotal: if he doesn’t contribute today then The Sun doesn’t rise tomorrow.
A mechanism like that will do better than just hanging a simple price tag on The Sun but don’t expect a windfall even from the best possible mechanism. Mailath and Postlewaite showed, essentially, that the maximum per-capita revenue you can earn from selling The Sun converges to zero as the population increases due to the ever-worsening free-rider problem.
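The free-rider intuition behind that result can be illustrated with a toy simulation. To be clear, this is not Mailath and Postlewaite’s mechanism, just the pivotal-probability intuition, and the uniform pledges and the 0.5·n provision threshold are invented: as the population grows, the chance that any one pledge is decisive shrinks, so the pledge each Earthling can be induced to make shrinks too.

```python
import random

random.seed(0)

def pivot_prob(n, pledge=0.5, trials=10_000):
    """Estimate the chance one agent's pledge is pivotal when the other n-1
    agents pledge independently U[0,1] and provision requires total pledges
    of at least 0.5*n (all numbers illustrative)."""
    threshold = 0.5 * n
    hits = 0
    for _ in range(trials):
        others = sum(random.random() for _ in range(n - 1))
        # pivotal: provision fails without this pledge but succeeds with it
        if threshold - pledge <= others < threshold:
            hits += 1
    return hits / trials

for n in (10, 100, 1000):
    print(n, pivot_prob(n))  # shrinks steadily as the population grows
```

In this sketch the pivotal probability falls roughly like one over the square root of the population, so per-capita revenue from pledge-based schemes withers as Earth gets crowded.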
You might want to start looking around for other planets in need of a yellow dwarf and try to generate a little more competition.
(Actual research comment: Mailath and Postlewaite consider efficient public good provision. I am not aware of any characterization of the profit-maximizing mechanism for a fixed population size and zero marginal production cost.)
[drawing: Move Mountains from http://www.f1me.net]




