According to Reuters:
British Airways (BA) could be fined up to 80 million euros (71 million pounds) next month for fixing cargo prices with other carriers, a source with direct knowledge of the case said on Tuesday.
A number of firms played “cooperate”:
The European Commission charged BA, Air France-KLM, SAS and several other airlines in December 2007 with taking part in an air freight cartel.
One played “defect”:
Lufthansa previously said it had immunity as it alerted the Commission to the cartel.
(Hat Tip: Amit Patel)
In the wake of the Nobel for the search theory of unemployment, let’s talk about the search models that really matter: hooking up.
Everybody who reads this blog understands the Prisoner’s Dilemma. Play it just once and neither side will cooperate. So a simple theory of relationships is based on a repeated Prisoner’s Dilemma. When the relationship can potentially continue, there is now an incentive to cooperate today in order to maintain cooperation in the future. Put differently, the threat of a future breakdown of cooperation enforces cooperation today.
But things get interesting when we embed this into a search and matching model. Out of the large pool of the unmatched, two singles get “matched” and they start a relationship, i.e. a repeated prisoner’s dilemma. As long as the relationship continues, each decides whether to cooperate or defect, and at any stage either party can break up the relationship and go find another match.
This possibility of breaking up the match adds a new friction to relationships. The threat of a breakdown in the current relationship is not enough anymore to incentivize cooperation because that threat can be avoided by leaving. And indeed, it’s not an equilibrium anymore for relationships to work efficiently because then any partner can cheat in his current relationship and then immediately go find another partner (who, expecting cooperation, is the next sucker, etc.)
Something has to give to maintain incentives. What’s the best way to make relationships just inefficient enough to keep as much cooperation as possible? A simple solution is to “start small:” at the beginning of any relationship there is a trial phase where the level of cooperation is purposefully low, and only after both partners remain in the relationship through the trial phase do they start to get it on, er, cooperate.
This courtship ritual is privately wasteful but socially valuable. Once I am in a relationship I am willing to wait through the trial phase because the reward of cooperation is waiting for me at the end. And once the trial phase is over I have no incentive to cheat because then I would just have to go through the trial phase again with my new partner. Equilibrium is restored.
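The “start small” logic is easy to check numerically. Here is a minimal sketch, with all numbers chosen purely for illustration (none come from the literature discussed in this post): cooperation pays 3 per period, a one-shot cheat pays 5, the trial phase pays 0, and future payoffs are discounted by 0.9. The code finds the shortest trial phase that makes cheating-and-rematching unprofitable.

```python
def min_trial_length(coop=3.0, cheat=5.0, trial=0.0, delta=0.9, max_T=50):
    """Smallest trial-phase length T such that cheating and rematching
    (grab the cheat payoff today, then sit through a fresh trial phase
    with a new partner) does not beat staying and cooperating."""
    V = coop / (1 - delta)                      # value once cooperation starts
    for T in range(max_T + 1):
        # Value of a brand-new match: T trial periods, then cooperation.
        W = sum(delta**k * trial for k in range(T)) + delta**T * V
        if V >= cheat + delta * W:              # cheating and leaving doesn't pay
            return T
    return None

print(min_trial_length())  # prints 1 with these parameters
```

With these numbers a single trial period suffices; raising the one-shot cheating payoff lengthens the required courtship.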
There are a number of different spins on this idea in the literature. There was an early series of papers by Joel Watson based on a model with incomplete information. I remember really liking this paper by Lindsey, Polak, and Zeckhauser on “Free Love, Fragile Fidelity, and Forgiveness.” And this quarter, we heard David McAdams with a new perspective on things, including some conditions under which courtship can be dispensed with altogether and partners can get right down to business.
Eric Budish and Estelle Cantillon study this issue in a recent paper. Suppose any mechanism that allocates students to slots in courses must be ex post efficient – i.e. there should be no room for trades that make all students better off ex post – and that it must be a dominant strategy for students to report their preferences truthfully (strategyproofness). It is well-known that the only mechanism satisfying these two properties is random serial dictatorship (RSD): students are randomly ordered and each student in turn picks his entire course schedule.
In practice, at HBS, students choose one course at a time. This cannot satisfy ex post efficiency and strategyproofness, and students must lie in equilibrium. How do they lie, and what welfare implications does this have relative to RSD? Budish and Cantillon are able to answer this question because they have two sets of data: (1) data on how students actually played the mechanism and (2) data on the students’ true preferences from a survey. The second set of data is rarely available.
First, by comparing actual play with reported preference they can see how students game the system. Second, by using the reported preferences in the survey they can simulate what would have happened in the RSD. They can then compare the RSD outcome with the outcome in the mechanism used and make judgements about welfare.
Their results: students gamed the system to grab slots in popular courses. For example, even if a popular course ranked low in a student’s ranking, he signed up for it at the earliest opportunity, fearing it would be gone if he waited. This causes congestion: popular courses fill up quickly, and people who value a course highly may not get in. Moreover, by the time the student bids on a less popular course, it may be full even though it is high in his own ranking. So there are two inefficiencies. But the HBS mechanism can still dominate RSD in terms of ex ante welfare. If a student is picked late in RSD and really values some course highly, it may already be gone by the time he gets to pick. The HBS mechanism at least gives him some chance to get some of his picks in early. Despite the gaming, this can increase welfare ex ante.
This is a cool paper. Some predictions of theory are borne out, which is always nice. More importantly, the welfare comparisons using real data make the paper quite original. Perhaps there is some room to tinker with the HBS mechanism, and this might lead to other insights.
As someone who knows more about Bayesian mechanism design than the strategyproofness literature, I am still trying to grope my way towards some unification of these approaches. It seems one should start off with some welfare criterion and some solution concept, and maximize welfare subject to incentive constraints which depend on the solution concept. If the solution concept is dominant strategy equilibrium, then this leads to dominant strategy incentive constraints. This may not lead to ex post efficient allocations (see Jeff’s timely post about Myerson-Satterthwaite!) but it can improve ex ante welfare. There is a large literature on Bayesian mechanism design; there is a large literature on revenue equivalence. But even in the latter case, the mechanism designer has a belief. I guess this implies there is some room for dominant strategy mechanism design without a prior, which I suppose is robust mechanism design. Has this approach been applied to market design? My impression is that it has not…
Tyler Cowen explores economic ideas that should be popularized. Let me take this opportunity to help popularize what I think is one of the pillars of economic theory and the fruit of the information economics/game theory era.
When we notice that markets or other institutions are inefficient, we need to ask compared to what? What is the best we could possibly hope for even if we could design markets from scratch? Myerson and Satterthwaite give the definitive answer: even the best of all possible market designs must be inefficient: it must leave some potential gains from trade unrealized.
If markets were perfectly efficient, whenever individual A values a good more than individual B it should be sold from B to A at a price that they find mutually agreeable. There are many possible prices, but how do they decide on one? The Myerson-Satterthwaite theorem says that, no matter how clever you are in designing the rules of negotiation, inevitably it will sometimes fail to converge on such a price.
The problem is one of information. If B is going to be induced to sell to A, the price must be high enough to make B willing to part with the good. And the more B values the good, the higher the price it must be. That principle, which is required for market efficiency, creates an incentive problem which makes efficiency impossible. Because now B has an incentive to hold out for a higher price by acting as if he is unwilling to part with the good. And sometimes that price is more than A is willing to pay.
From Myerson-Satterthwaite we know what the right benchmark is for markets: we should expect no more from them than what is consistent with these informational constraints. It is a fundamental change in the way we think about markets and it is now part of the basic language of economics. Indeed, in my undergraduate intermediate microeconomics course I give a simple proof of a dominant-strategy version of Myerson-Satterthwaite; you can find it here.
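One way to make the benchmark concrete is a posted-price mechanism, a standard textbook example (not the proof linked in the post): it is strategyproof, yet with buyer and seller values drawn uniformly from [0, 1] and a posted price of 1/2, only about half of the pairs with gains from trade actually trade.

```python
import random

def posted_price_trade(buyer_value, seller_value, price):
    """Trade happens iff the buyer is willing to pay the posted price and
    the seller is willing to accept it. Truthful reporting is dominant:
    your report only gates your own trade, never the price."""
    return buyer_value >= price and seller_value <= price

random.seed(0)
n = 200_000
efficient = realized = 0
for _ in range(n):
    b, s = random.random(), random.random()
    if b > s:                          # gains from trade exist
        efficient += 1
        if posted_price_trade(b, s, price=0.5):
            realized += 1
print(realized / efficient)  # ~0.5: half the gains-from-trade pairs trade
```

The pair (b, s) = (0.4, 0.1) has gains from trade but no trade at price 0.5: that unrealized surplus is exactly the kind of inefficiency Myerson-Satterthwaite says no mechanism can fully eliminate.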
(Myerson won the Nobel prize jointly with Maskin and Hurwicz. There should be a second Nobel for Myerson and Satterthwaite.)
Fiction can only be so strange because, as fiction, it quickly loses credibility if it gets too strange. The audience loses the willingness to suspend disbelief. When truth is strange it is truly strange.
Of course truth is strange only by accident. So truth will be less strange on average than fiction because fiction is intentionally strange. But measured by their peaks, truth will be stranger than fiction.
(Not a post about Juan Williams.) From the comments in a post from Jonathan Weinstein:
In fact, there is a simple procedure to simulate an (exactly) unbiased random coin from a biased one. Flip your coin twice (and repeat the procedure if you obtain the same outcome). Call “Heads” if you first got heads then tails, and “Tails” otherwise.
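This trick (usually attributed to von Neumann) works because HT and TH have the same probability p(1−p) regardless of the bias. A quick simulation, with the bias 0.8 chosen arbitrarily:

```python
import random

def biased_flip(p):
    """Flip a coin that lands heads with probability p."""
    return "H" if random.random() < p else "T"

def fair_flip(p):
    """Von Neumann's trick: flip twice, discard HH and TT.
    P(HT) = p*(1-p) = P(TH), so the surviving outcomes are equally likely."""
    while True:
        a, b = biased_flip(p), biased_flip(p)
        if a != b:
            return "Heads" if a == "H" else "Tails"

random.seed(1)
flips = [fair_flip(0.8) for _ in range(100_000)]
print(flips.count("Heads") / len(flips))  # close to 0.5
```

The cost of exactness is a random number of flips: the more biased the coin, the more often a pair is discarded.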
Check out the whole discussion to see how this relates to Ultimate Frisbee, the essential randomness of the last digit in any large integer, Mark Machina, and Fourier analysis over Abelian groups.
From the latest issue of the Journal of Wine Economics, comes this paper.
The purpose of this paper is to measure the impact of Robert Parker’s oenological grades on Bordeaux wine prices. We study their impact on the so-called en primeur wine prices, i.e., the prices determined by the château owners when the wines are still extremely young. The Parker grades are usually published in the spring of each year, before the wine prices are established. However, the wine grades attributed in 2003 have been published much later, in the autumn, after the determination of the prices. This unusual reversal is exploited to estimate a Parker effect. We find that, on average, the effect is equal to 2.80 euros per bottle of wine. We also estimate grade-specific effects, and use these estimates to predict what the prices would have been had Parker attended the spring tasting in 2003.
Note that the €2.80 number is the effect on price from having a rating at all, averaging across good ratings and bad. You do have to buy some identifying assumptions, however.
Here he writes about underappreciated economist Eric van den Steen. Tyler is right, Eric van den Steen is underappreciated. His work is fresh and creative and he is venturing into terrain (heterogeneous priors) where few dare to tread. Not only that but he is drawing out credible applied ideas from there, not just philosophy. (Ran Spiegler and Kfir Eliaz are two others that come to mind with the same creativity and courage to embrace these models.)
Tyler Cowen is under-appreciated. Not as a blogger of course, he writes the most popular blog in economics and one of the most popular blogs full stop. It may sound strange, especially to readers of Marginal Revolution, but Tyler Cowen is an under-appreciated economist.
Here is his CV. Here is his google scholar listing. Here is the ranking of his economics department. If it were not for Marginal Revolution, very few economists would know who Tyler Cowen is.
But we all read Marginal Revolution. And we all know that Tyler is smarter, broader, more knowledgeable, more intuitive than most of us and our colleagues. If he wanted it to, his CV could run circles around ours. I don’t claim to know why he doesn’t want that, but I infer that Tyler believes he is innovating a new way to be a successful and influential economist without compromising on the very high standards that those of us in the old regime hold.
Public signals like Google scholar cites, and top-journal publications can’t measure his contribution to economics but we measure it privately every day when we read Marginal Revolution. And it deserves to be made public: Tyler Cowen is a great economist.
One doesn’t just accidentally know who Eric van den Steen is, let alone summarize in a paragraph his contribution and its relation to the literature. I barely knew who he was, and it’s my job as a member of Northwestern’s recruiting committees to know. For Tyler Cowen to pick him out of the very many young economists and identify him as the most under-appreciated reveals that Tyler Cowen knows and reads every economist. I believe it is true.
And he understands them better than most of us. Look at what he wrote about Dale Mortensen. And the Mortensen-Pissarides model. Here’s Tyler re-arranging the literature on sticky prices, trade, and monetary policy. His piece on free parking shows a mastery of the lost art of price theory, whether or not you agree with his final conclusion. Look at his IO reading list for crying out loud. Finally, set aside an hour and watch him in his element speaking at Google about incentives and prizes.
So hail T-Cow! Wunder-(not)-kind!
The seminal (economist’s!) answer to this question has been offered by my old teacher in grad school and my colleague till a few years ago, Kathy Spier, in her paper “Incomplete Contracts and Signaling”. As her title suggests, her core idea is based on signaling: an informed party making an offer in a game signals his private information via the offer. An offer that carries a negative inference may not be made. Kathy’s model is quite complex but its central logic is captured in a passage from her paper:
A fellow might hesitate to ask his fiancée to sign a prenuptial agreement…. because to do so would lead her to believe that the quality of the marriage – or the probability of divorce – are higher than she had thought.
In the new century, roles are reversed – the wealthy partner might be female and the poor one male. If there is no pre-nup, the man can extract a large fraction of his ex-wife’s wealth after a divorce. In that situation, to signal his love, the man should offer to sign a pre-nup that gives him none of his ex-wife’s fortune. If he is confident the marriage will survive, divorce is impossible anyway, so why worry about income in an impossible event?
Alas, as the poets have long told us, the path of true love does not run smooth – the most well-intentioned and loving couple can find their marriage has hit the rocks. Then, there will be much regret and perhaps desperate, legal action to extract enough cash to live in the style to which one has become accustomed.
And so I turn finally to this sad case in the British courts:
When Katrin Radmacher and Nicolas Granatino married in 1998, she insisted it had been for love, not for money. That was why the wealthy German heiress had ensured that her banker husband signed a prenuptial agreement promising to make no claims on her fortune if the marriage failed. It was, she said, “a way of proving you are marrying only for love”.
Once the love had gone, however – the couple separated in 2006 – the fortune remained, and Granatino, by then a mature student at Oxford, decided to challenge the prenup, which they had signed in Germany before marrying and divorcing in Britain, arguing it had no status in English law.
But Granatino lost.
I’m sure a research paper can come out of this: two-sided incomplete information, two-sided signaling and optimal contracting… I’m too busy keeping my marriage alive to have the time to write it.
The threat of the death penalty makes defendants more willing to accept a given plea bargain offer. But a tough-on-crime DA takes up the slack by making tougher offers. What is the net effect? A simple model delivers a clear prediction: the threat of the death penalty results in fewer plea bargains and more cases going to trial.
The DA is like a textbook monopolist, but instead of setting a price, he offers a reduced sentence. The defendant can accept the offer and plead guilty, or reject it and go to trial, taking his chances with the jury. Just like the monopolist, the DA’s optimal plea offer trades off marginal benefit and marginal cost. When he offers a stiffer sentence, the marginal benefit is that defendants who accept it serve more time. The marginal cost is that the defendant is more likely to reject the tougher offer, and more cases go to trial. The marginal defendant is the one whose trial prospects make him just indifferent between accepting and rejecting the plea bargain.
Introducing the death penalty changes the payoff to a defendant who rejects a plea deal (his reservation value.) The key observation is that this change affects defendants differently according to their likelihood of conviction at trial. Defendants facing a difficult case are more likely to be convicted and suffer the increased penalty. (Formally, the reservation value is now steeper as a function of the probability of conviction.)
One thing the DA could do is increase the sentence in his plea bargain offer just enough that the pre-death-penalty marginal defendant is once again indifferent between accepting and rejecting. The rate of plea bargains would then be the same as before the death penalty.
But he can do better by offering an even tougher sentence. The reason: his marginal benefit of such a move is the same as it was pre-death penalty (the same infra-marginal defendants serve more time) but the marginal cost is now lower for two reasons. First, compared to the no-death-penalty scenario, fewer defendants reject the tougher offer, because we are moving along a steeper reservation-value curve. Second, those who do reject now get a stiffer penalty (death) conditional on conviction.
The DA’s tougher stance in plea bargaining means that fewer defendants accept and more cases go to trial. Evidence? Here is one paper showing that the re-instatement of the death penalty in New York led to no increase in the rate of plea bargains accepted (and a clear decrease in the size of plea bargain offers.)
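The comparative static can be verified in a toy version of the model: conviction probability q uniform on [0, 1], trial sentence S, a trial cost c borne by the DA, and defendants who accept any offer weakly below their expected trial sentence qS. All numbers are illustrative assumptions, not estimates from the New York paper.

```python
def best_plea_offer(S, c, grid=10_000):
    """DA picks the offer x maximizing expected sentence net of trial costs.
    A defendant with conviction probability q accepts iff x <= q*S."""
    best_x, best_payoff = 0.0, float("-inf")
    for i in range(grid + 1):
        x = S * i / grid
        accept_rate = max(0.0, 1 - x / S)       # q ~ Uniform[0, 1]
        # Accepted pleas serve x; rejected cases go to trial: E[qS] - c
        # over the rejecting types q < x/S.
        reject_cutoff = x / S
        trial_payoff = S * reject_cutoff**2 / 2 - c * reject_cutoff
        payoff = x * accept_rate + trial_payoff
        if payoff > best_payoff:
            best_x, best_payoff = x, payoff
    return best_x, 1 - best_x / S               # (offer, plea rate)

for S in (10, 20):                              # 20 = the "death penalty" regime
    x, rate = best_plea_offer(S, c=2)
    print(f"trial sentence {S}: offer {x:.1f}, plea rate {rate:.2f}")
```

Doubling the trial sentence pushes the optimal offer from 8 to 18 and halves the plea rate from 20% to 10%: tougher offers, fewer pleas, more trials.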
The Justice Department is suing Blue Cross Blue Shield of Michigan for its use of “most favored nation” clauses in contracts with hospitals:
Blue Cross and Blue Shield, like most insurers, contracts with hospitals, doctors, labs and other providers for services. The lawsuit took direct aim at contract clauses stipulating that no insurance companies could obtain better rates from the providers than Blue Cross. Some of these contract provisions, known as “most favored nation” clauses, require hospitals to charge other insurers a specified percentage more than they charge Blue Cross — in some cases, 30 to 40 percent more, the lawsuit said.
This kind of contract has several effects. Most obviously, it deters entry by other health insurance companies that automatically face higher costs than BCBS Michigan because of the most favored nation clause. BCBS then has more monopoly power and can charge higher prices for its products. Second, a competitor is going to have a hard time negotiating a low price with a hospital, as any low price it negotiates also has to be passed on to BCBS to maintain the 40% difference. The hospital is in effect giving a double discount and is less likely to accept such a large cut in profits on a large BCBS contract to get incremental business from a small health insurance company. Third, BCBS has less incentive to negotiate lower prices for itself: its competitors’ prices are automatically higher anyway. BCBS is even willing to pay higher prices to hospitals to get the most favored nation clause put in:
The lawsuit also asserts that Blue Cross, in effect, bought protection from competition — by agreeing to pay higher prices to certain hospitals to induce them to agree to the “most favored nation” clauses.
Lots of interesting stuff. I’m looking forward to following this case to see what happens.
Which type of artist debuts with obscure experimental work, the genius or the fraud? Kim-Sau Chung and Peter Eso have a new paper which answers the question: it’s both of these types.
Suppose that a new composer is choosing a debut project and he can try a composition in a conventional style or he can write 4’33”, the infamous John Cage composition consisting of three movements of total silence. Critics understand the conventional style well enough to assess the talent of a composer who goes that route. Nobody understands 4’33” and so the experimental composer generates no public information about his talent.
There are three types of composer. Those that know they are talented enough to have a long career, those that know they are not talented enough and will soon drop out, and then the middle type: those that don’t know yet whether they are talented enough and will learn more from the success of their debut. In the Chung-Eso model, the first two types go the experimental route and only the middle type debuts with a conventional work.
The reason is intuitive. First, the average talent of experimental artists must be higher than conventional artists. Because if it were the other way around, i.e. conventional debuts signaled talent then all types would choose a conventional debut, making it not a signal at all. The middle types would because they want that positive signal and they want the more informative project. The high and low types would because the positive signal is all they care about.
Then, once we see that the experimental project signals higher than average talent, we can infer that it’s the high types and the low types that go experimental. Both of these types are willing to take the positive signal from the style of work in exchange for generating less information by the actual composition. The middle types on the other hand are willing to forego the buzz they would generate by going experimental in return for the chance to learn about their talent. So they debut conventionally.
Now, as the economics PhD job market approaches, which fields in economics are the experimental ones (generates buzz but nobody understands it, populated by the geniuses as well as the frauds) and which ones are conventional (easy to assess, but generally dull and signals a middling type) ?
Drug addicts across the UK are being offered money to be sterilised by an American charity.
Project Prevention is offering to pay £200 to any drug user in London, Glasgow, Bristol, Leicester and parts of Wales who agrees to be operated on.
The first person in the UK to accept the cash is drug addict “John” from Leicester who says he “should never be a father”.
Probably everyone would agree that a better contract would be one that offers payment for regular use of contraception, rather than irreversible sterilization. Sterilization is probably a “second-best” because it is easier to monitor and enforce.
But it takes sides in the addict’s conflicting preferences over time. He is trading off money today versus children in the future. For some, that’s what makes it the right second-best. For others that’s what makes it exploitation.
Here is more.
Suppose I want to divide a pie between you and another person. It is known that the other person gets one dollar from each unit of pie (a fraction x of the pie is worth x to him), but your value is known only to you: you value a fraction x of the pie at vx dollars, and nobody but you knows what v is.

My goal is to allocate the pie efficiently. If both of you are selfish, then this means that I would like to give all the pie to him if v is less than 1 and all the pie to you otherwise. And if you are selfish then I can’t get you to tell me the truth about v. You will always say it is larger than 1 in order to get the whole pie.
But what if you are inequity averse? Inequity aversion is a behavioral theory of preferences which is often used to explain non-selfish behavior that we see in experiments. If you are inequity averse your utility has a kink at the point where your total pie value equals his. When you have less than him you always like more pie both because you like pie and because you dislike the inequality. When you have more than him you are conflicted because you like more pie but you dislike having even more than he has.
In that case, my objective is more interesting than when you are selfish. If v is not too much larger than 1, then both you and he want perfect equity. So that’s the efficient allocation. And to achieve that, I should give you less pie than him because you get more value per unit. And now as we consider variations in v, increases in v mean you should get even less! This continues until v is so much larger than 1 that your value for more pie outweighs your aversion to inequity, and now you want the whole pie (although he still wants equity.)
And it’s now much easier to get you to tell me the truth. You will always tell me the truth when your value v is in the range where perfect equity is the unique efficient outcome, because that way you get exactly what you want. Beyond that range you will again have an incentive to lie about v to get as much pie as possible.
So inequity aversion has a very clear implication for an experiment like this. If the experimenter promises to allocate the pie efficiently given the reported value and asks the subject to report his value v, then inequity averse subjects will do only two possible things: tell the truth, or exaggerate their value as much as possible. They will never understate their value.
I would be curious to see if there are any experiments like this.
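One way to operationalize the kinked preferences above is the Fehr-Schmidt specification. The sketch below uses illustrative parameters (alpha = 1 for disliking being behind, beta = 0.6 for disliking being ahead; neither is calibrated to anything) and finds your favorite split by grid search.

```python
def fehr_schmidt(my_payoff, his_payoff, alpha=1.0, beta=0.6):
    """Utility = own payoff minus a penalty for inequality in either direction."""
    return (my_payoff
            - alpha * max(his_payoff - my_payoff, 0)
            - beta * max(my_payoff - his_payoff, 0))

def preferred_share(v, grid=1000):
    """Your favorite fraction x of the pie when a fraction x is worth v*x
    to you and the remaining 1-x is worth 1-x to him."""
    return max((i / grid for i in range(grid + 1)),
               key=lambda x: fehr_schmidt(v * x, 1 - x))

for v in (1.2, 1.4, 2.0):
    # The preferred equitable share shrinks as v rises, then jumps to the whole pie.
    print(v, round(preferred_share(v), 3))
```

With beta = 0.6 the switch point is v = beta/(1−beta) = 1.5: below it you want the equitable share 1/(1+v), which shrinks as v grows; above it you want the whole pie. That discontinuity is what generates the truth-or-maximal-exaggeration prediction.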
I am looking for a Northwestern student who can help me with a very small data collection task. PhD student preferred. It wouldn’t take more than a few hours and I would pay you the standard RA rate plus eventual fame and glory.
Our water pipes in Evanston are old and need replacing. What does that mean? More taxes, of course!
But the Water Division of the City of Evanston is managed by sharp dedicated operators who know a thing or two about running a business.
First, they understand good PR. They are running tours of the water treatment facility just north of the NU campus on Lincoln and Sheridan. Our kids wanted to go of course. I thought the whole group traipsing through would be made up of parents and kids. But no! We had retirees asking very technical questions, interested citizens wondering about water purity and funny smells caused by dead algae and a middle-aged couple out on a date. We were all charmed by the head of Evanston Water and his entourage and quite convinced by his arguments for higher taxes.
But he also has another plan. Many of the northern suburbs get their water either from Chicago or Evanston and are tied into long term contracts. As the contracts expire, Evanston and Chicago will compete for business and Evanston is at an advantage. Our cost of production is slightly higher – Chicago can generate greater economies of scale. But Chicago has more large consumers. If it gives one of them a discount, say Morton Grove, then others will clamour for the same deal. Evanston does not face this issue to the same extent. We can safely undercut Chicago’s price to Morton Grove without passing on the same discount to other towns we supply. We have a good old competitive advantage. Then, we can use the profits to upgrade our water system without raising taxes so much.
Another interesting tidbit: even though we sell water to Skokie at 60c per unit, they charge much more to their citizens. They add a Skokie margin on top of the margin Evanston is already charging them: so-called double marginalization. Demand for water is inelastic, so double marginalization does not have much effect. But if demand were elastic, then prices would be lower, and consumer welfare and even profits higher, if the Skokie and Evanston utilities merged and set prices like an integrated monopolist. (This ignores the fact that prices are regulated in some way; I do not have the details of exactly how.) Perhaps there are other examples like this, where one area sells a public good to another, in which double marginalization is more important.
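With linear demand the double-marginalization comparison is a two-line calculation. A sketch with made-up numbers (demand q = a − p, upstream marginal cost c; nothing here reflects actual Evanston or Skokie rates):

```python
def integrated(a, c):
    """Single monopolist: price (a+c)/2. Returns (price, profit)."""
    p = (a + c) / 2
    q = a - p
    return p, (p - c) * q

def double_margin(a, c):
    """Upstream sets wholesale w anticipating the retailer's markup p=(a+w)/2,
    then the retailer adds its own margin on top. Returns (price, total profit)."""
    w = (a + c) / 2
    p = (a + w) / 2
    q = a - p
    return p, (w - c) * q + (p - w) * q   # upstream + retailer profit

a, c = 10, 2
print(integrated(a, c))     # (6.0, 16.0): lower price, higher total profit
print(double_margin(a, c))  # (8.0, 12.0): two margins hurt everyone
```

Price rises from 6 to 8 and combined profit falls from 16 to 12 once each link in the chain adds its own margin, which is why an integrated price-setter would leave both consumers and the utilities better off when demand is elastic.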

Apart from a certain solitary activity, all other sensations caused by our own action are filtered out or muted by the brain so that we can focus on external stimuli. There is a famous experiment which demonstrates an unintended consequence of this otherwise useful system.
You and I stand before each other with hands extended. We are going to take turns pressing a finger onto the other’s palm. Each of us has been secretly instructed to try, each time, to match the force the other applied on the previous turn.
But what actually happens is that we press down on each other progressively harder and harder at every turn. And at the end of the experiment each of us reports that we were following instructions and it was the other that was escalating the pressure. Indeed, the subjects in these experiments were asked to guess the instructions given to their counterpart and they guessed that the others were instructed to double the pressure.
What’s happening is that the brain magnifies the sensation caused by the other’s pressing and mutes the sensation caused by our own. Thus, each of us underestimates the pressure when it is caused by our own action. (In a control experiment the force was mediated by a mechanical device – and not the finger directly – and there was no escalation.) So each subject believes he is following the instructions but in fact each is contributing equally to the escalating pressure.
You are invited to extrapolate this idea to all kinds of social interaction where you are being perfectly polite, reasonable, and accommodating, but he is being insensitive, abrasive, and stubborn.
For while O’Donnell crusaded against masturbation in the mid-1990s, denouncing it as “toying” with the organs of procreation and generally undermining baby making, the facts are to the contrary. Evidence from elephants to rodents to humans shows that masturbating is—counterintuitively—an excellent way to make healthy babies, and lots of them. No one who believes in the “family” part of family values can let her claims stand.
You will find that opening paragraph in an entertaining article in Newsweek (lid lob: linkfilter.) It surveys a variety of stories suggesting that masturbation serves an adaptive role and was selected for by evolution. The stories given (hygiene, signaling (??)) are mostly of the just-so variety, but this is a case where we don’t need to infer exactly the reason. We can prove the evolutionary advantage of masturbation by a simple appeal to revealed preference.
There are lots of ways we can touch ourselves and among these, Mother Nature has revealed a very clear preference. You cannot tickle yourself. Because the brain has a system for distinguishing between stimuli caused by others and stimuli caused by ourselves. Nature puts this system to good use: such a huge fraction of sensory information comes from incidental contact with yourself that it has to be filtered out so that we can detect contact with others.
Mother Nature could have used this same system to put an end to masturbation once and for all: simply detect when it’s us and mute the sensation. No gain, no Spain. Instead, she made an exception in this case. She must have had a good reason.

The most widely cited study on the effect of cell phone usage on traffic accidents is this one by Redelmeier and Tibshirani in the New England Journal of Medicine. Their conclusion is that talking on the phone leads to a fourfold increase in accident risk.
Their method is interesting. It's called a case-crossover design, and it works like this. We want to know the odds ratio of an accident when you talk on the phone versus when you don't. Let's write it like this, where A is the event of an accident and C is the event of talking on a cell phone while driving:

P(A|C) / P(A|not-C).

But we have no way of estimating numerator or denominator from traffic accident data because we would need to know the counterfactuals of how often people drive (with and without talking on the phone) and don't have accidents. Case-crossover studies are based on a little algebraic trick which transforms the odds ratio into something we can estimate, with just a little more data. Using Bayes' rule and two lines of algebra, we can rewrite it like this:

P(A|C) / P(A|not-C) = [P(C|A) / P(not-C|A)] × [P(not-C) / P(C)].
From accident data we can estimate the first term on the right-hand side. We just calculate the odds that someone in an accident was talking on the phone. The finesse comes in when we estimate the second term. We don't want to just estimate the overall frequency of cell phone use, because we estimated the first term using a selected sample of people who had accidents. They may be different from the population as a whole. We want the cellphone usage rates for the people in our sample.
Case-crossover studies take each person in the data who had an accident and ask them to report whether they were talking on the phone while driving at the same time of day one week before. Thus, each person generates their own control case. It's a valid control because it's the same person, driving at the same time, and on average therefore under the same conditions. These survey data are used to estimate the second term.
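Plugging in made-up counts shows how the two terms combine. (The numbers below are purely illustrative, not the study's actual data.)

```python
# Case-crossover estimate of the odds ratio, on made-up illustrative counts.
# A = having an accident, C = talking on the phone while driving.
# Among n_accidents drivers who crashed, n_on_phone were on the phone at the
# time of the crash; in the control window (same hour one week earlier),
# n_on_phone_control of the same drivers report having been on the phone.

n_accidents = 699            # drivers in the sample (all had an accident)
n_on_phone = 170             # on the phone at the time of the crash
n_on_phone_control = 50      # on the phone in the control window

# First term: odds of phone use conditional on an accident, P(C|A)/P(not-C|A)
odds_given_accident = n_on_phone / (n_accidents - n_on_phone)

# Second term: baseline odds of phone use for these same drivers,
# estimated from the control window, P(C)/P(not-C)
odds_baseline = n_on_phone_control / (n_accidents - n_on_phone_control)

# Odds ratio: P(A|C)/P(A|not-C) = [P(C|A)/P(not-C|A)] * [P(not-C)/P(C)]
odds_ratio = odds_given_accident / odds_baseline

print(f"estimated odds ratio: {odds_ratio:.2f}")
```

With these invented counts the estimate comes out close to the fourfold increase the study reports, but the arithmetic is the point, not the numbers.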
It's really clever and it's used a lot in epidemiological studies. (People get sick; some were exposed to some potential hazard, others not. The method is used to estimate the increase in risk of getting sick due to being exposed to the hazard.)
I have never seen it in economics, however. In fact, this was the first I had ever heard of it. So it's natural to wonder why. And it doesn't take long before you see that it has a serious weakness when applied to data with a lot of heterogeneity.
To see the problem, suppose that there are two types of people. The first group, in addition to being generally accident-prone, are also easily distracted. Everyone else is a safe driver, and talking on cellphones doesn't make them any less safe. Then our sample of people who actually had accidents would consist disproportionately of the first group. We would be estimating the effect of cell phone use on them alone. If they make up a small fraction of the population then we are drastically overestimating the increase in risk.
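A deterministic toy calculation makes the selection problem concrete. All of the shares and risk numbers below are invented for illustration.

```python
# A deterministic illustration of the heterogeneity problem, with made-up numbers.
# Type 1 ("accident-prone and distractible"): 10% of drivers; phone use
# quadruples their per-trip accident probability. Type 2: 90% of drivers;
# phone use does not change their risk at all.

share1, share2 = 0.10, 0.90
p_phone = 0.05                      # everyone is on the phone 5% of driving time

base1, base2 = 0.008, 0.001         # per-trip accident prob. while not on phone
risk1_phone, risk2_phone = 4 * base1, base2   # type-2 risk unchanged by phone

# Accident probability for each type, averaging over phone / no-phone driving
acc1 = p_phone * risk1_phone + (1 - p_phone) * base1
acc2 = p_phone * risk2_phone + (1 - p_phone) * base2

# Among drivers who had an accident, what fraction were on the phone?
phone_accidents = share1 * p_phone * risk1_phone + share2 * p_phone * risk2_phone
all_accidents = share1 * acc1 + share2 * acc2
p_phone_given_acc = phone_accidents / all_accidents

# Case-crossover estimate: odds of phone use given an accident,
# divided by baseline odds of phone use (the same for both types here)
estimated_or = (p_phone_given_acc / (1 - p_phone_given_acc)) / (p_phone / (1 - p_phone))

# True population-average risk ratio, weighting each type by its share of drivers
true_avg_rr = share1 * (risk1_phone / base1) + share2 * (risk2_phone / base2)

print(f"case-crossover estimate:       {estimated_or:.2f}")
print(f"population-average risk ratio: {true_avg_rr:.2f}")
```

Because the accident sample is dominated by the distractible type, the case-crossover estimate lands well above the population-average risk ratio.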
It’s fair to say that at best we can use the estimate of 4 as an upper bound on the risk ratio averaging over the entire population. That population average could be zero and still be consistent with the findings from case crossover studies. And there is no simple way to remedy the problems with this method. So I think there is good reason to approach this question from a different direction.
As I described before, if cell phone distractions increase accident risk we would see it by comparing the population of drivers to drivers with hearing impairment, who don’t use cell phones. And it turns out that the data exist. In the NHTSA’s database of traffic accidents, there is this variable:
P18 Person’s Physical Impairment
Definition: Identifies physical impairments for all drivers and non-motorists which may have contributed to the cause of the crash.
And “deaf” is impairment number 9.
The prize has been awarded to Peter Diamond, Dale Mortensen and Chris Pissarides.
First, let me bask in some Nobel glory and say “I called it!”: in a post last week, I used the Kellogg/NU data to predict this prize. This goes to show that “information aggregation and voting” has one data point in its support.
Dale Mortensen is at Northwestern so I know him a little. I remember having a very fun conversation with Dale and his wife and Ed Prescott (before he won the Nobel Prize himself) at a Schwartz dinner at Northwestern. There was a quite lively discussion of the Iraq war led by Dale’s wife. I’ve never met Chris Pissarides. All I can say as someone raised in the U.K. is that it’s great that a professor at L.S.E. won the Prize. L.S.E. is an amazing intellectual, cosmopolitan institution. Hayek and Coase spent formative years there. It’s wonderful that Pissarides got his PhD at LSE and has spent almost his entire career there. I visited MIT last year but I was too intimidated by Diamond to strike up a conversation!
Here is my attempt to offer some explanation of some papers. My choices are somewhat idiosyncratic as they are the papers I have read rather than perhaps their key papers.
Peter Diamond has a classic paper, A Model of Price Adjustment, in the Journal of Economic Theory in 1971. Diamond shows that even an infinitesimal search cost can lead to monopoly pricing rather than competitive pricing because of a hold-up problem. Suppose there is no search cost and two firms are selling an identical good. The logic of (Bertrand) competition means they will both end up pricing at cost. At any higher price, one firm can undercut the other and capture the entire demand rather than half the demand, doubling its profit.
Instead suppose there is a small search cost e>0 a consumer must pay to discover a price. Pricing at cost is no longer an equilibrium – one firm can raise its price by almost e. The consumer discovers the higher price once he enters the store. But going to the other store to get a lower price involves a transaction cost of e anyway. So it is better to submit to the hold-up and pay the higher price. This logic applies at all prices below the monopoly price. At that point you do not want to raise the price any more, as consumers simply stop buying at a rate that makes further price increases lead to lower profits. So a small search cost completely reverses the intuition about pricing.
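The unraveling logic can be sketched as a toy loop, with invented numbers for cost, monopoly price, and the search cost e: starting from the competitive price, each firm can profitably raise its price by (almost) e without losing the consumer, and the process stops only at the monopoly price.

```python
# A toy sketch of the Diamond (1971) unraveling argument, with made-up numbers.
# Once inside a store, a consumer accepts any price within e of what he
# expected to find elsewhere, because switching stores costs e anyway.

cost = 10.0            # marginal cost = competitive (Bertrand) price
monopoly_price = 25.0  # price past which further increases lower profit
e = 1.0                # search cost of visiting the other store

price = cost
rounds = 0
while price < monopoly_price:
    # A firm can raise its price by up to e without losing the consumer,
    # but never past the monopoly price.
    price = min(price + e, monopoly_price)
    rounds += 1

print(f"price unravels to {price} after {rounds} rounds")
```

However small e is, the iteration always terminates at the monopoly price – it just takes more rounds.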
Diamond has made seminal contributions to many areas. He has worked on general equilibrium with incomplete markets, the overlapping-generations model and on public finance (Diamond-Mirrlees).
Of Dale Mortensen's papers, I know Property Rights and Efficiency in Mating, Racing and Related Games in the American Economic Review in 1982. Suppose parties are trading and have to invest ex ante to increase the ex post value of trade. The investment could be search for a trading partner, R&D investment, etc. If they do not trade, each goes back into the search market to trade with someone else. If they do trade, any surplus they generate is split 50-50. The latter property implies there is a kind of tax on ex ante investment and generates underinvestment. In common with Diamond, there is not only search but also ex post hold-up. In Diamond, the price can be increased by the firm behind the veil of secrecy. In Mortensen, ex post haggling over price generates hold-up. The Mortensen model in the AER is closely related to the Grossman-Hart model of incomplete contracts and property-right allocation, and also to last year's prize to Oliver Williamson.
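A minimal numeric example shows the "tax" at work. The functional form v(x) = 2·sqrt(x) and the 50-50 split are assumptions chosen purely to make the distortion visible.

```python
# Hold-up as a tax on ex ante investment: a made-up numeric illustration.
# The investor pays the full cost of investment x but, under 50-50 ex post
# bargaining, keeps only half of the surplus v(x) = 2*sqrt(x) it creates.
import math

def surplus(x):
    return 2 * math.sqrt(x)

grid = [i / 1000 for i in range(1, 3001)]  # candidate investment levels

# Efficient investment: maximize total surplus net of cost, v(x) - x
x_efficient = max(grid, key=lambda x: surplus(x) - x)

# Equilibrium investment: the investor keeps half the surplus, v(x)/2 - x
x_equilibrium = max(grid, key=lambda x: surplus(x) / 2 - x)

print(f"efficient investment:   {x_efficient:.2f}")   # analytically x = 1
print(f"equilibrium investment: {x_equilibrium:.2f}") # analytically x = 1/4
```

Halving the marginal return to investment cuts the chosen investment to a quarter of the efficient level here – exactly the underinvestment the split-the-surplus rule generates.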
Pissarides's AER 1985 paper Short-Run Equilibrium Dynamics of Unemployment, Vacancies, and Real Wages also assumes unemployed workers and firms bargain over wages ex post. Their share of the surplus depends on their outside option, which in turn depends on the tightness of the labor market – intuitively, the more workers are unemployed, the lower the wage firms must pay. In fact, in the simple model Pissarides proposes, it is possible to derive explicit solutions relating unemployment to wages and vacancies, and even to take the model to data. Hence, the Diamond-Mortensen-Pissarides model has become a canonical model with which to study unemployment. It has been extended in many directions by many authors, including Mortensen and Pissarides together.
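To give a flavor of the explicit solutions, here is the standard steady-state flow condition from the matching framework: inflows into unemployment (separations) balance outflows (job finding), so u = s/(s + f). The rates below are made up for illustration.

```python
# Back-of-the-envelope steady-state unemployment in the matching framework.
# In steady state, flows in equal flows out: s*(1 - u) = f*u, so u = s/(s + f).
# The rates below are invented, not calibrated to any data.

s = 0.02   # monthly separation rate: 2% of jobs end each month
f = 0.30   # monthly job-finding rate: 30% of the unemployed find work

u = s / (s + f)
print(f"steady-state unemployment rate: {u:.1%}")
```

In the full model the job-finding rate f itself depends on labor-market tightness through the matching function, which is what ties unemployment to vacancies and wages.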
I guess the DMP model is being used to study unemployment dynamics in the current recession and to propose policy responses. A highly timely prize.
(WordPress doesn't allow automatic pushing of updates so you will have to click refresh to see updates.)
And here we are waiting outside the Royal Swedish Academy of Sciences.
The crowd is gathering, everyone here is really tall.
From my understanding, the committee is inside meeting right now, holding the final vote to decide the 2010 Nobel Prize in Economic Sciences.
I am not sure exactly what happens if the committee’s recommendation doesn’t pass this final vote. Maybe then they just put the prize up for auction.
Which of course means that either Milgrom will win, given his expertise, or Mankiw would win, given his textbook riches.
On the other hand if it came down to arm wrestling, Matt Rabin would be the 2010 Laureate.
I am sure Matt would share it. That’s how he rolls.
Can you wear tie-dye to the December ceremony? Do they make tie-dye tuxedos?
Speaking of tie-dye, some guy just brushed by me, whispering “ice-cold Econometricas.”
It’s definitely getting a little grungy out here as the crowd begins to gather. Some people, probably grad students, just rolled out of a VW bus.
They're apparently doing the whole Nobel tour; they just drove from Norway, where the Peace Prize was announced. I don't think they have tickets.
Oh wait, it's a Saab.
Ok while we are waiting here’s some Nobel trivia for you. Did you know that Alfred Nobel’s original instruction was to give the Prize for research published in the previous year?
They stopped doing that when one of the recipients’ work was discredited after already winning the prize. Now they wait like 10 years before giving the prize.
A little different in economics of course because the publication process already takes 10 years. The work is already established by the time it is published.
Which means of course it is sure to be discredited already by then.
Hey I can hear music inside the building, I think it's getting ready to start. Is that John Mayer?
I must tell you I am feeling a little out of sorts. The lingonberries I had for breakfast are messing with me.
Ok we are entering the auditorium!
The crowd is enormous. There is a definite rumble as the excitement starts to build.
The guy next to me just handed me a bag of peanuts and motioned to pass it on. I thought that was a little strange, but I did it.
Money coming the other way…
Hey this is it! The lights are dimming. There is a giant video screen.
A silhouette of a face has just been projected on the screen. Could that be the 2010 Nobel laureate?
I just noticed the guy next to me is waving a pennant and eating cotton candy. Where did he get that?
The feeling of suspense here is overwhelming. People are actually whistling and cheering. They are doing the wave! There are fireworks in the auditorium.
A streaker! Stenciled across his posterior: “Backward Induction.” Not sure what that means. Must be a physicist, still disgruntled about the whole Economics Nobel thing.
Ernst Fehr just tackled him!
OK, the distraction is over, now the light is blazing white hot from the big screen. Everybody is squinting and shielding their eyes as they try to make out the face.
It’s coming into focus! And now there’s a name below the face.
It seems like the louder the crowd roars the clearer the image becomes. It’s deafening now.
Some people are exploding into cheers, they must be able to make out the name. Ahh… I can see it now. The 2010 Nobel Laureate in Economics is…
Carl Yastrzemski!!!!
?!
Definitely a surprise pick by the Nobel committee! People are scrambling for their laptops to file their reports. This is huge!
I must say I was not expecting this. I am supposed to write a summary of the work and I really am caught off guard by this one.
And here I was worried that I would have a hard time figuring out Dick Thaler’s contribution to economics, sheesh what am I gonna do with Carl Yastrzemski??
I think he may be the first to win the Triple Crown and the Economics Nobel.
OK, it's time for the big phone call. On the big screen now you see they are dialing and waiting. The phone is ringing.
Hey my phone is ringing! Gonna answer it.
It’s Sandeep! “Hey Sandeep you are never going to believe this!”
S: “Jeff, did you forget you were going to blog the Nobel announcement?”
J: “No man, I am here, I am on it. Carl Yastrzemski! Can you believe that?”
S: “What are you talking about? Look at the clock man, you overslept. You missed the whole thing. Diamond, Mortensen, and Pissarides got it, exactly as I predicted last week. This is a big day for Northwestern and you missed it! Who the hell is Carl Yastrzemski?”
J: “Overslept? Huh? Wait, I was there. Where am I? Oh man, I thought I set my alarm. They said 11AM.”
S: “GMT you idiot. You know what GMT means don’t you?”
J: “Give-or-take a Minute or Two?”
S: “Aiee! That’s it, I’m taking my kids out of the American public school system today. Bye”
Congratulations to the new laureates!! Way to go Dale!!
I will be live blogging the Nobel announcement. Tune in here just before 11AM GMT.
1. John Lennon at Madison Square Garden in 1972 (requires free membership).
2. Randall Grahm wine maker at Bonny Doon has an award-winning blog.
3. Eating and drinking in South Tyrol.
The District of Columbia is testing a system to allow overseas military personnel to submit absentee electronic ballots via the internet. Obviously security is a major concern, and they followed a suggestion often made by the security community: open the system to the public and allow white-hat hackers to try to find exploits. Here is the account of one team who participated and found a vulnerability within 36 hours.
By formatting the string in a particular way, we could cause the server to execute commands on our behalf. For example, the filename “ballot.$(sleep 10)pdf” would cause the server to pause for ten seconds (executing the “sleep 10” command) before responding. In effect, this vulnerability allowed us to remotely log in to the server as a privileged user.
As a result, deployment of the system has been delayed.
This is exactly the kind of open, public testing that many of us in the e-voting security community — including me — have been encouraging vendors and municipalities to conduct.
But it could have turned out differently. If a black hat had gotten there first, they could have fixed the vulnerability after first leaving themselves a backdoor. Then the test comes out looking like a success, the system goes live, and …
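The class of bug in the quoted account is shell command injection through an attacker-controlled filename. A sketch of the pattern and its fix, in hypothetical code that has nothing to do with the actual D.C. system:

```python
# Sketch of shell command injection via a filename (illustrative only).
# The filename "ballot.$(sleep 10)pdf" is harmless as data, but if it is
# interpolated into a shell command string, the shell executes $(sleep 10).
import shlex
import subprocess

def process_ballot_unsafe(filename):
    # VULNERABLE: the filename is spliced into a shell command, so shell
    # metacharacters like $(...) in it get executed by the shell.
    subprocess.call(f"gpg --encrypt {filename}", shell=True)

def process_ballot_safe(filename):
    # SAFE: passing an argument list avoids the shell entirely, so the
    # filename is only ever treated as data.
    subprocess.call(["gpg", "--encrypt", filename])

# When a shell really is required, quoting the argument neutralizes it:
quoted = shlex.quote("ballot.$(sleep 10)pdf")
print(quoted)  # single-quoted, so the $(...) is never interpreted
```

The general lesson matches the post: the bug is not exotic, it is a routine failure to treat user input as data rather than code.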
Here are votes from some interested subset from NU Econ and Kellogg. NU and Kellogg Management and Strategy have lots of I.O. specialists and Dale Mortensen is in the Economics Department. Plus we have lots of theorists. The closest I have to a coherent story from this data is a prize for “search theory” with Diamond, Mortensen and Pissarides.
Tyler Cowen invites us to ponder this game:
Rejection Therapy is a real life game with one rule: to be rejected by someone every single day, for 30 consecutive days. There are even suggestion cards available for “rejection attempts” (although they are not essential to the game).
I am not sure about rejection as therapy, any more than the general principle that it is therapeutic to expose yourself to new, perhaps uncomfortable experiences all the time.
But rejection is a very simple yardstick by which to judge how often and how hard you are trying, how high you are aiming. We should push those margins as far as they can go, up to the point of negative marginal returns. We have not passed that threshold until the rejection rate is positive.
So, whether or not it is an end in itself, a daily dose of rejection is the hallmark of a life lived to the fullest.
Ariel Rubinstein wrote the Afterword for the 2007 reprinting of the book that launched Game Theory as a field, von Neumann and Morgenstern’s Theory of Games and Economic Behavior. Here is a representative excerpt:
Others (including myself) think that the object of game theory is primarily to study the considerations used in decision making in interactive situations. It identifies patterns of reasoning and investigates their implications on decision making in strategic situations. According to this opinion, game theory does not have normative implications and its empirical significance is very limited. Game theory is viewed as a cousin of logic. Logic does not allow us to screen out true statements from false ones and does not help us distinguish right from wrong. Game theory does not tell us which action is preferable or predict what other people will do. If game theory is nevertheless useful or practical, it is only indirectly so. In any case, the burden of proof is on those who use game theory to make policy recommendations, not on those who doubt the practical value of game theory in the first place.
And, by the way, I sometimes wonder why people are so obsessed in looking for “usefulness” in economics generally and game theory in particular. Should academic research be judged by its usefulness?
Tam o’Shanter Toss: Russ Roberts

