You are currently browsing Sandeep Baliga’s articles.
One way players might play a game is by learning over time until they reach a best response to the strategies they have observed in the past. If learning converges, then a natural hypothesis, due to Fudenberg and Levine, is that it settles on a self-confirming equilibrium:
Self-Confirming Equilibrium (SCE) is a relaxation of Nash equilibrium: Each player chooses a best response to his beliefs and his beliefs are “correct” on the path of play. But different players may have different beliefs over strategies off the path of play and may believe that players’ actions are correlated. Nash equilibrium (NE) requires that players’ beliefs are also correct off the path of play, that all players have the same beliefs over off the path play and that players’ strategies are independent. As the definition of Nash equilibrium puts extra constraints on beliefs, the set of Nash equilibria of a game cannot be larger than the set of self-confirming equilibria.
There is no reason why learning based on past play should tell us anything about off path play. So SCE is a more natural prediction for the outcome of learning than NE. Finally, we come to college football!
The University of Oregon football team has been pursuing an innovative “off the path” strategy:
“Oregon plays so fast that it is not uncommon for it to snap the ball 7 seconds into the 40-second play clock, long before defenses are accustomed to being set. That is so quick that opponents have no ability to substitute between plays, and fans at home do not have time to run to the fridge.”
Opposing teams on defense are just not used to playing against this strategy and have not developed a best response. So far they have come up with an import from soccer, the old fake-an-injury strategy. This has yielded great moments like the YouTube video above.
I am trying to relate this football scenario to SCE. SCE does not incorporate experimentation, which is what the Oregon Ducks are trying, so this is immediately inconsistent with SCE. But set that aside – even without experimentation, is the status quo of slower snaps and best responses to them an SCE? I think it is, and that it is even consistent with NE.
Even in a SCE of the two-player sequential-move game of football, the offense has to hypothesize what the defense would do if the offense plays fast. Given their conjecture about the defense’s play if the offense plays fast, it is better for the offense to play slow than to play fast. Their conjecture about the defense’s response to fast snaps does not have to be a best response for the defense, as this node is unreached. And the defense plays a best response to what they observe – slow play by the offense. So both players are at a best response, and the offense’s conjecture about the defense’s play off the path of play can be taken to be “correct”, as neither SCE nor NE puts restrictions on the defense being at a best response off the path of play.
In other words, in two-player games, a SCE is automatically a NE. From diagonalizing Fudenberg and Levine, it seems that this is true if you rule out correlated strategies (but I am administering an exam as I write this so I cannot concentrate!). If I am right, the football example is consistent with SCE and hence NE. (In three (or more) player games, there can be a substantive difference between SCE and NE, as different players can have different conjectures about off-path play in SCE but not NE, and this can turn out to be important.)
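A toy version of the argument can be put in code. The payoffs below are numbers I made up purely for illustration, not anything estimated from football:

```python
# Toy sequential football game. Offense moves first: "slow" or "fast" snap.
# Offense payoffs at each (offense action, defense response) pair -- made up.
U_off = {
    ("slow", "set"): 2,         # defense gets set; modest gain for offense
    ("fast", "adjust"): 1,      # conjectured: a prepared defense stops fast snaps
    ("fast", "fake_injury"): 3, # the stopgap response actually observed
}

# The offense's conjecture: if it played fast, the defense would "adjust".
conjecture = {"slow": "set", "fast": "adjust"}

# The offense best-responds to its conjecture:
best = max(["slow", "fast"], key=lambda a: U_off[(a, conjecture[a])])
print(best)  # "slow": 2 beats 1 under the conjecture

# On the path of play ("slow"), beliefs are confirmed: the defense does set up.
# Off the path ("fast"), the conjecture is never tested, so this is a
# self-confirming equilibrium -- and, read as a strategy profile, a Nash
# equilibrium too, since NE also never checks the off-path node for
# sequential rationality.
```

The point is just that the conjectured response to fast snaps is never tested, so any conjecture pessimistic enough to keep the offense playing slow survives.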
But the football experience is not necessarily a Subgame Perfect Equilibrium. This adds the requirement of sequential rationality to Nash equilibrium: Each player’s strategy at all decision nodes, even those off the path of play, has to be a best response to his beliefs and beliefs have to be correct etc. So, it may be that football teams on offense have been assuming there is some devastating loss to playing fast. First, it is simply hard to play fast and perhaps they thought it was easy to defend fast snaps. But since this was never really tested, no-one really knew it for a fact.
Now the Oregon Ducks are experimenting and their opponents are trying to find a best response. So far they have come up with faking injuries. Eventually they will find a best response. Then and only then will the teams learn whether it is better for the offense to have fast snaps or slow snaps. And then they will play subgame perfect equilibrium: the offense may switch back to slow snaps if the best response to fast snaps is sufficiently devastating.
Part I of BBC program:
Obama’s favorite strategy is to be middle-of-the-road: don’t side with either Democrats or Republicans and hope both sides support a compromise plan. To do this effectively, it is important not to reveal your true preferences and maintain “strategic ambiguity” as clarity risks alienating one audience or the other. But if it is obvious that one side is not going to support you, you have to show your hand to get the other audience to buy in. Otherwise, no-one will support your policy and it will never pass. I mentioned that Obama already had to do this earlier this year with healthcare reform. Now an interesting article by Matt Bai points out that he faces a similar dilemma over debt reduction.
Mr. Obama has almost invariably sought to position himself halfway between traditionalism and reform, just as his vague notions of “hope” and “change” during the 2008 campaign were meant to appeal simultaneously to both disaffected independent voters and core progressives. And in virtually every case, he has satisfied pretty much no one….Since he isn’t willing to break publicly with liberals, independent and conservative voters tend to see him as a tool of the left. And since he generally won’t do exactly what the left wants him to do, he ends up with very little gratitude from his own party.
This political no-man’s land, however, is about to become uninhabitable. The national debt is near the top of any list of voter concerns at the moment, and when his commission votes Friday on its final recommendations, Mr. Obama will be handed concrete and contrasting options for addressing it.
I’m trying to lose the extra Robiola-weight I acquired last year by all too frequent visits to Formaggio Kitchen. Jogging is boring and it is too cold for cycling so I have started to attend spinning classes. (Spinning is basically cycling on a stationary bike which has a weighted flywheel.) Today, the instructor split the nine of us in class into three teams of three and had us play a game.
At first, she said at least one team had to be standing and at least one sitting at any point in time. If this condition was not met, the instructor would choose one team and single it out for punishment. The punishment involved putting huge weight on the flywheel and pedaling hard. I believe this sort of varying speed and endurance regimen is known as Fartlek exercise – that was how it was described to us.
Standing is harder work – our team has one member who was particularly reluctant to stand. There is a free-rider problem and if it cannot be resolved, there is punishment. There is a Chicken-like flavor to the game: if two teams can somehow “commit” to sit, the third’s best response is to stand to escape a 1/3 chance of a big punishment. There are multiple pure strategy equilibria and an inefficient mixed strategy one.
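For the game-theoretically inclined, here is a toy version of the class game, with costs I invented. Enumerating pure-strategy profiles confirms that the equilibria are exactly the ones where a single team stands:

```python
from itertools import product

# Three teams each choose "stand" or "sit". Standing costs 1 (made-up number).
# If everyone stands or everyone sits, the instructor punishes one team at
# random: expected cost 6 * (1/3) = 2 per team (also made up).
STAND_COST, PUNISH, TEAMS = 1, 6, 3

def payoff(profile, i):
    u = -STAND_COST if profile[i] == "stand" else 0.0
    if len(set(profile)) == 1:  # rule violated: everyone took the same action
        u -= PUNISH / TEAMS
    return u

def is_pure_nash(profile):
    # No team can gain by unilaterally switching its action.
    for i in range(TEAMS):
        for dev in ("stand", "sit"):
            alt = profile[:i] + (dev,) + profile[i + 1:]
            if payoff(alt, i) > payoff(profile, i):
                return False
    return True

nash = [p for p in product(("stand", "sit"), repeat=TEAMS) if is_pure_nash(p)]
print(nash)  # exactly the three profiles in which a single team stands
```

With two teams sitting, the third prefers standing at cost 1 to risking the punishment; with one team already standing, everyone else free-rides by sitting. That is the Chicken flavor.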
But the Coase Theorem applied: each team appointed a leader who would shout “stand” or “sit” and we rotated turns standing in thirty-second intervals timed using the clock on the wall of the exercise room. No inefficient punishments, just intertemporal transfers.
But there were errors, largely from miscommunication within a team as people got more tired and as the instructor kept changing the rules, confusing everyone deliberately. The punishment is meant to be directed at the team that mis-coordinated, but this can be hard to determine. This makes the punishment a little random. One team blames the other for the punishment and can vindictively trick the other into a false move that gets them punished. Also, coming out of the punishment and coordinating again with the other teams is hard and can lead to another punishment cycle.
So, there were inefficiencies caused by bounded rationality/miscommunication and occasional bouts of vindictiveness. But at least in our little exercise room, things worked out and the Coase Theorem applied 99% of the time. It was remarked that perhaps similar exercises might be performed before the next round of global warming negotiations to give everyone the skills to get along.
1. Wrigley Field restaurant owner to possible Afghan druglord.
2. Prince Andrew is following in his father’s footsteps according to this cable. Parts 13c and 14c are particularly good. The American Ambassador displays a dry sarcastic writing style which is quite engaging.
3. The New Machiavelli thinks the U.S. looks good.
4. The Guardian is quite Machiavellian.
Readers of Q.J.E. will say Josh Angrist.
Readers of Freakonomics will say Steven Levitt.
Followers of the Nobel Prize will say (or try to say!), Trygve Haavelmo.
But it turns out, they should say Philip Wright. The identification (ha ha) was done by Jim Stock and Francesco Trebbi who report:
The earliest known solution to the identification problem in econometrics – the problem of identifying and estimating one or more coefficients of a system of simultaneous equations – appears in Appendix B of a book written by Philip G. Wright, The Tariff on Animal and Vegetable Oils, published in 1928. Its first 285 pages are a painfully detailed treatise on animal and vegetable oils – their production, uses, markets, and tariffs. Then, out of the blue, comes Appendix B: a succinct and insightful explanation of why data on price and quantity alone are in general inadequate for estimating either supply or demand; two separate and correct derivations of the instrumental variables estimators of the supply and demand elasticities; and an empirical application to butter and flaxseed.
Stock and Trebbi have to do a lot of detective work because Sewall Wright, the great geneticist, was Philip’s son and people thought the son might have written Appendix B. But Stock and Trebbi do a fun econometric analysis of the text of Appendix B and compare it to Philip’s and Sewall’s other work (this is called stylometry). They find that Philip must have written Appendix B.
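The flavor of the method can be sketched in a few lines. The snippets below are made-up stand-ins, not the Wrights’ actual prose: fingerprint each author by the relative frequency of common function words, then attribute the disputed text to the nearest fingerprint.

```python
from collections import Counter

# A crude stylometric sketch: function-word frequencies as author fingerprints.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that"]

def fingerprint(text):
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / len(words) for w in FUNCTION_WORDS]

def distance(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

samples = {  # invented stand-ins for each author's undisputed writing
    "Philip": "the tariff and the data show that the demand curve shifts",
    "Sewall": "of paths of influence of traits in systems of mating",
}
disputed = "the supply and the demand of butter"

d = fingerprint(disputed)
author = min(samples, key=lambda a: distance(fingerprint(samples[a]), d))
print(author)  # "Philip" -- the disputed snippet leans on his function words
```

The real analysis is of course far more careful, but the logic is the same: function-word usage is hard to fake and differs stably across authors.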
(HT: Enrico Spolaore and Francesco Trebbi at NBER last week)
Frank Rich thinks he might wake up to the nightmare of President Palin in 2012/3:
When Palin told Barbara Walters last week that she believed she could beat Barack Obama in 2012, it wasn’t an idle boast. Should Michael Bloomberg decide to spend billions on a quixotic run as a third-party spoiler, all bets on Obama are off.
Does Bloomberg want to make this happen? It seems not as I noted last week:
Bloomberg, who isn’t affiliated with Republicans or Democrats, said a candidate running outside the two-party system couldn’t get a majority of the 538 votes in the Electoral College, which would trigger a provision in the U.S. Constitution giving the House of Representatives power to decide the election.
“Unless you get a majority, it goes to the House,” he said today during a conference sponsored by the Wall Street Journal in Washington. “It’s going to go to the Republicans because the Republicans have just taken over the House.”
If Bloomberg wants to be President he will have to run as a Republican.
From Bloomberg (the firm not the man):
New York Mayor Michael Bloomberg, who has been mentioned as a potential U.S. presidential candidate, said he doesn’t believe an independent can win.
Bloomberg, who isn’t affiliated with Republicans or Democrats, said a candidate running outside the two-party system couldn’t get a majority of the 538 votes in the Electoral College, which would trigger a provision in the U.S. Constitution giving the House of Representatives power to decide the election.
“Unless you get a majority, it goes to the House,” he said today during a conference sponsored by the Wall Street Journal in Washington. “It’s going to go to the Republicans because the Republicans have just taken over the House.”
I guess billionaires can do backward induction. If only people who can do backward induction were billionaires….
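Bloomberg’s backward induction fits in a few lines. The vote splits below are hypothetical:

```python
# Stylized three-way race. If nobody reaches 270 electoral votes, the 12th
# Amendment sends the election to the House, assumed here to vote its
# majority party.
EC_TOTAL = 538
MAJORITY = EC_TOTAL // 2 + 1  # 270

def president(ec_votes, house_party="R"):
    """Winner given a dict of candidate -> electoral votes."""
    for candidate, votes in ec_votes.items():
        if votes >= MAJORITY:
            return candidate
    return house_party  # no majority: the House decides

# Even a strong independent showing only denies everyone a majority:
print(president({"D": 200, "R": 180, "I": 158}))  # -> "R", via the House
```

So an independent run is dominated: the only winning branch of Bloomberg’s game tree is reaching 270 outright, which he says cannot happen.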
As junior recruiting approaches, we cannot help but speculate on the optimal way to compare apples to oranges – candidates across different fields (e.g. micro vs macro) and across universities. I speculated a while ago that a “best athlete” recruiting system across fields is prone to gaming. Each field might simply claim its candidate is great. To stop that happening, you might have to live with having slots allocated to fields and/or rotating slots over time.
It turns out that Yeon-Koo Che, Wouter Dessein and Navin Kartik have thought about something much more subtle along these lines in their paper “Pandering to Persuade”. They consider both comparisons across fields and across candidates from different universities. I’m going to give a rough synopsis of the paper.
Suppose the recruiting committee in an economics department is deciding whether to hire a theorist or a labor economist. There is only one labor economist candidate and her quality is known. There are two theorists, one from University A and one from University B. The recruiting committee would like to hire a theorist if and only if his quality is higher than the labor economist’s. Also, the recruiting committee and everyone else believes that, on average, candidates from University A are better than those from University B. But of course this is only true on average. Luckily some theorists can read the paper and help fine tune the committee’s assessment of the theory candidates. They share the committee’s interest in hiring the best theorist but they are quite shallow and hence uninterested in research outside their own field. In particular, theorists do not care for labor economics and always prefer a theorist at the end of the day.
So, the recruiting committee must listen to the theorists’ recommendation with care. First, the theorists have huge incentives to exaggerate the quality of their favored candidate if this carries influence with the committee. Hence, quality evaluations cannot be trusted. All the theorists can credibly do is say which candidate is better but not by how much. But there is a further problem: if the theorists say candidate B is better, given the committee’s prior, they might think better of candidate B and yet prefer to hire the labor economist! Being theorists, the sender(s) can do backward induction and they know the difficulty with their strategy if it is too honest. The solution is obvious to the theorists: extol the virtues of candidate A even when candidate B is a little better. Hence, in equilibrium, the candidate from the ex ante better university gets favored. But candidate B still has a shot: if they are sufficiently good, the theorists still recommend them. The committee may with some probability still go with the labor economist so it is risky to make this recommendation. But if candidate B is sufficiently good, the theorists may want to run this risk rather than push the favored candidate A. I refer you to the paper for the full equilibrium(a) but, as you can see, the paper is fun and interesting.
There are some extensions considered. In one, the authors study delegation to the theorists. Sometimes the department will lose out on a good labor economist but at least there is no incentive for the theorists to select the worst candidate. This is the giving slots to fields solution I wondered about and it is derived in this elegant model.
I haven’t spent enough time in Paris to sample Michelin-recommended restaurants. When I lived in London, I didn’t have the income to go to Marco Pierre White’s Michelin-starred temple to gastronomy. Finally, years of hard graft have left me with a little disposable income beyond the mortgage and other expenses. And I am living near a city which has been noticed by the rotund Michelin Man. Living vicariously through the Dining section of the NYT, I always wondered whether the gripes I read about the Michelin Guide were justified. Now I can finally weigh in, as they have released their list of Bib Gourmand restaurants for Chicago.
Bib Gourmand appears to mean restaurants where you can have two courses and a glass of wine or dessert for $40 or less. What is not clear is whether such restaurants can also qualify for the higher ranking of one to three stars. I assume they cannot because that’s the only way I can rationalize what’s on the list: Frontera Grill and Lula Cafe on the same list as the pancake place, Ann Sather?!! La Creperie, gimme a break, is just crap. My wife and I almost walked out of Veerasway it was so bad.
Part of the problem is that the gradation of categories is too coarse. Just like the top B and the bottom B in an undergrad economics class, a lot of successes and sins are hidden in one lump. If this Chicago list is anything to go by, the Robert Parker 0-100 numerical approach to wine and the Zagat approach to restaurants may be better.
Advice before tenure is some variation around a cliché: Publish or Perish! Some universities may assess impact or make their own subjective evaluation of your work, believing that they have the taste and scientific expertise to do so. Others may have a more quantitative approach, counting papers, ranking journals and adding up citations. But if you haven’t published, basically you are going to perish.
Suppose you get tenure. You overcome your feeling of ennui and the “Is this all there is?” existential crisis. You accept the fact that you are probably going to be stuck with the same people for another 30-35 years. You publish the stuff that was in the pipeline when you came up for tenure. What do you do next?
When I have personal dilemmas of this sort, I try to find some wise women to help me out. For the post-tenure dilemma, I turned to the Committee on the Status of Women in the Economics Profession (CSWEP). Their Winter 2009 issue has lots of useful articles. This is several years after my tenure but it turns out I was instinctively following much of the advice anyway. For example, Bob Hall says in his article:
“Now that you have tenure, the number of papers you produce is amazingly irrelevant. One good paper a year would put you at the very top of productivity. Consequently, you should generally spend your research time on the most promising of the projects you are working on. A related principle is that you should try to maintain a lot of slack in your time allocation, so that if a great research idea pops into your head or a great opportunity comes along in another way—an offer of collaboration or access to a data set—you can exploit it quickly.”
He adds:
Research shows that good ideas are more likely to spring into your head when it is fuzzy and relaxed, not when you are focused and concentrating, with caffeine at its maximum dose. Another principle is that if you get away from a problem for a bit—say by taking a vacation or spending a weekend with your family—the answer may come to you easily when you return to work on the problem.
Finally, Hall becomes quite practical:
To sum up, the big danger for an economist at your career stage is to get involved in so many seemingly meritorious activities on campus, at journals, in Washington, at conferences, writing textbooks, serving clients, and the like, that your life becomes crowded and you feel hassled. Worst of all, you find yourself starved of time for creative research. When this happens, take out a piece of paper and write down all of the activities that fill your work day and decide which ones to cross off. This sounds like trite self-help book advice, but it works.
His whole article is here and is definitely worth a read. One thing I would say he misses: Hall is at Stanford, a top research university. Perhaps, universities below that hallowed standard are still quantity oriented as they cannot judge quality – journalism or the endless re-labeling of the same idea again and again might be mistaken for fundamental research and lead to a pay rise or internal status. You might still just ignore that and go with Hall’s advice.
The blog is definitely helping both Jeff and me stay fuzzy and relaxed, as you can tell from our posts. But I have to sign off now, go make a list and cross some other things off….
We have a new guest-blogger: Roger Myerson.
Roger is a game theorist but his work is known to everyone – theorist or otherwise – who has done graduate work in economics. If an economist from the late nineteenth century, like Edgeworth, or early twentieth century, like Marshall, wakes up and asks, “What’s new in economics since my time?”, I guess one answer is, “Information Economics”.
Is the investment bank trying to sell me a security that it is trying to dump or is it a good investment? Is a bank’s employee screening borrowers carefully before he makes mortgage loans? Does the insurance company have enough reserves to cover its policies if many of them go bad at the same time? All these topical situations are characterized by asymmetric information: One party knows some information or is taking an action that is not observable to a trading partner.
While the classical economists certainly discussed information, they did not think about it systematically. At the very least, we have to get into the nitty-gritty of how an economic agent’s allocation varies with his actions and his information to study the impact of asymmetric information. And perfect competition with its focus on prices and quantities is not a natural paradigm for studying these kind of issues. But if we open the Pandora’s Box of production and exchange to study allocation systems broader than perfect competition, how are we even going to be able to sort through the infinite possibilities that appear? And how are we going to determine the best way to deal with the constraints imposed by asymmetric information?
These questions were answered by the field of mechanism design, to which Roger Myerson made major contributions. If an allocation of resources is achievable by any allocation system (or mechanism), then it can be achieved by a “direct revelation game” (DRG) where agents are given the incentive to report their information honestly, told actions to take and then given the incentives to follow orders. To get an agent to tell you his information, you may have to pay him “information rent”. To get an agent to take an action, you may have to pay him a kind of bonus for performing well, “an efficiency wage”. But these payments are unavoidable – if you have to pay them in a DRG, you have to pay them (or more!) in any other mechanism. All this is quite abstract, but it has practical applications. Roger used these techniques to show that the kind of simple auctions we see in the real world in fact maximize expected profits for the seller in certain circumstances, even though they leave information rents to the winner. These rents must be paid in a DRG and hence, if an auction leaves exactly these rents to the buyers, the seller cannot do any better.
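The textbook case can be checked with a quick Monte Carlo sketch (my own toy setup, not from Roger’s papers): two bidders with values uniform on [0,1] in a second-price auction with a reserve price. Theory says the revenue-maximizing reserve is 1/2, even though it sometimes leaves the good unsold.

```python
import random

def expected_revenue(reserve, n_bidders=2, trials=50_000, seed=0):
    """Simulated seller revenue in a second-price auction with a reserve,
    bidders' values drawn i.i.d. uniform on [0,1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bids = sorted(rng.random() for _ in range(n_bidders))
        high, second = bids[-1], bids[-2]
        if high >= reserve:                 # otherwise the good goes unsold
            total += max(second, reserve)   # winner pays max(2nd bid, reserve)
    return total / trials

print(expected_revenue(0.0))  # about 1/3: no reserve, no rent extracted
print(expected_revenue(0.5))  # about 5/12: the optimal reserve does better
```

The reserve acts like the monopoly price in the DRG: it sacrifices some trades in order to claw back information rent from high-value buyers.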
For this work and more, he won the Nobel Memorial Prize in Economics in 2007 with Leo Hurwicz and Eric Maskin. Recently, Jeff mentioned that Roger and Mark Satterthwaite should get a second Nobel for the Myerson-Satterthwaite Theorem which identifies environments where it is impossible to achieve efficient allocations because agents have to be paid information rents to reveal their information honestly. This work also uses the framework and DRG I have described above.
Over time, Roger has become more of an “applied theorist”. That is a fuzzy term that means different things to different people. To me, it means that a researcher begins by looking at an issue in the world and writes down a model to understand it and say something interesting about it. Roger now thinks about how to build a system of government from scratch or about the causes of the financial crisis. How do we make sure leaders and their henchmen behave themselves and don’t try to extract more than minimum rents? How can incentives of investment advisors generate credit cycles?
These questions are important and obviously motivated by political and economic events. The first question belongs to “political economy” and hints at Roger’s interests in political science. More broadly, Roger is now interested in all sorts of policy questions, in economics and domestic and foreign policy.
Jeff and I are very happy to have him as a guest blogger. We hope he finds it easy and fun and the blog provides him with a path to get his analyses and opinions into the public domain. We hope he becomes a permanent member of the blog. So, if among the posts about masturbation and Charlie Sheen’s marital problems you find a post about “What should be done in Afghanistan”, you’ll know who wrote it.
Welcome Roger!
Fresh from their rout of the Democrats, the G.O.P. are promising to repeal President Obama’s healthcare reform. There are lots of things that can be improved in the law but there are also some features that will be popular with voters. To think about the risks of repeal, I find it useful to recall my favorite Rumsfeld quote:
“[A]s we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.”
In the minds of most voters, the healthcare law is an unknown unknown – most of its provisions have not gone into effect and non-experts (and even experts?) have not read the bill fully. There was never a really serious discussion of the law in the mainstream media so citizens just know various buzzwords (“death panels”). The huge uncertainty made the law unpopular with voters and it gave Republicans an electoral advantage.
If the Republicans clamor to repeal the law, the Democrats can point to the features of the law that will be popular with voters (no denial of coverage for pre-existing conditions, kids can stay on their parents’ policies until they are twenty-six…). The law will go from being an unknown unknown into known unknown/known known territory. There is only upside for Democrats from this change – uncertainty-averse voters can’t have a worse impression of the healthcare reform than they already have. They are judging the unknown unknown in its worst possible light. The risk for Republicans is that if voters find some parts of the law appealing, their assessment actually improves and the Republicans’ electoral advantage diminishes. Better to keep the details hidden.
There is a “middle of the road” strategy – repeal unpopular bits and keep the popular ones. I’m not sure if this is really viable either. It forces a serious discussion of the law. For example, if Republicans try to repeal the requirement to buy insurance coverage, there will be some discussion of the subsidies offered and the costs of getting rid of the provision (some people will not buy insurance and then free-ride on emergency care pushing up costs for everyone else). Even if the discussion is a mess, it can’t be worse than the shallow discussion we sat through last year.
My guess is that the usual promise of tax cuts seems to dominate healthcare repeal as a strategy for Republicans.
Another celebrity marriage has hit the rocks: Charlie Sheen and Brooke Mueller have filed for divorce. As sad as this is, we economists can tell that Charlie and/or his lawyers are good negotiators. His divorce filing (page 27) requires that:
“In no event and under no circumstances shall the child support paid by Charlie for Bob and Max [his kids with Brooke] be less than the child support paid by Charlie to Denise Richards for Sam and Lola…… The sum received by Denise for child support shall be the minimum received by Brooke for Bob and Max.”
This is all dressed with the claim that Denise has higher earning potential than Brooke. But so what? Brooke and her kids are in great shape even if Denise and her kids are in better shape. Envy has to be playing a role. But more cynically, Charlie benefits: If Denise asks for more money, he can refuse saying he’d also have to pay more to Brooke and he simply cannot afford it, given his expensive watch buying habit. Also, Brooke is going to negotiate less hard for more money in the future, hoping Denise does the hard work.
So that brings us to Denise – I am not a legal expert but is it O.K. to mention a contract with Denise in a contract with Brooke without Denise agreeing to it? Denise should really sue to have this clause removed. It definitely hurts her in future negotiations. Denise, if you need an expert witness, I am right here.
I was taught the example below as an undergrad and I think it may originally be due to Pigou (or perhaps, as you will see, it was made up by British Rail):
Suppose travelers can either drive to get from A to B or take the train. The road is free and tolls are technologically infeasible. Using the train requires buying a ticket. In fact, the railways are nationalized and, in a narrow definition of the first-best, they should price at marginal cost (this is a pre-privatization example!). This means too many people will drive, imposing costs on each other and speeding up depreciation of the road. One solution: subsidize train tickets and price them below cost. One departure from optimality – the inability to charge a toll on the road – leads to another – pricing below cost on the train. The theory of the second-best.
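A numerical version of the argument, with parameters entirely made up:

```python
# Travelers choose between a congestible road and a train.
N, A, B = 100, 10, 1   # travelers; per-driver road time cost = A + B * n_road
T_TRAIN, MC = 30, 10   # train travel time; real resource cost per train trip

def n_road(p):
    """Equilibrium number of drivers at train price p: generalized costs
    equalize, A + B*n = T_TRAIN + p (clipped to [0, N])."""
    return max(0.0, min(N, (T_TRAIN + p - A) / B))

def social_cost(p):
    n = n_road(p)
    # The ticket price is a transfer, so welfare counts time plus resource cost.
    return n * (A + B * n) + (N - n) * (T_TRAIN + MC)

# Grid search over ticket prices, including prices below cost (subsidies):
best_p = min((p / 10 for p in range(-200, 201)), key=social_cost)
print(best_p, MC)  # -5.0 vs 10: the second-best ticket price is below cost
```

The welfare-maximizing ticket price comes out below the train’s marginal cost (in this toy parameterization it is even a subsidy), precisely because the unpriced road congestion cannot be taxed directly.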
Now let’s turn to torture in medieval Continental Europe. The law was administered by professional judges and sentences often involved maiming or death. As the punishment was so severe, the proof had to be overwhelming: Conviction required the testimony of two eyewitnesses to the crime. No circumstantial evidence was allowed. Let us take the two eyewitness rule as a constraint, like the no toll rule for the road.
What can the judge do if there are fewer than two eyewitnesses but there is circumstantial evidence that the prisoner is guilty? The judge is then allowed to use torture to extract a confession. To prevent false confessions, the prisoner has to give up details only someone who is guilty would know. Dealing with one constraint, the two-eyewitness rule, leads to the use of torture, presumably not the best way to introduce circumstantial evidence into judicial proceedings.
There was much room for abuse (leading the prisoner during torture so they know details of the crime, etc.). Torture was eventually outlawed in judicial proceedings. There are many theories for why. The most prosaic suggests that, with incarceration replacing maiming as a tool of punishment, standards of proof could be relaxed. In England, torture was not used because trial was by jury. A jury at that time (or even now?) could take any old fact, circumstantial or not, into account in its deliberations. No need to torture a prisoner when you can convict him on a whim anyway!
My source: The Legal History of Torture, John H. Langbein in Torture: A Collection.
According to Reuters:
British Airways (BA) could be fined up to 80 million euros (71 million pounds) next month for fixing cargo prices with other carriers, a source with direct knowledge of the case said on Tuesday.
A number of firms played “cooperate”:
The European Commission charged BA, Air France-KLM, SAS and several other airlines in December 2007 with taking part in an air freight cartel.
One played “defect”:
Lufthansa previously said it had immunity as it alerted the Commission to the cartel.
(Hat Tip: Amit Patel)
Eric Budish and Estelle Cantillon study this issue in a recent paper. Suppose any mechanism that allocates students to slots in courses must be ex post efficient – i.e. there should be no room for trades that make all students better off ex post – and that it must be a dominant strategy for students to report their preferences truthfully (strategyproofness). It is well-known that the only mechanism satisfying these two properties is random serial dictatorship (RSD): students are randomly ordered and each student, in turn, picks his entire course schedule from the remaining slots.
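A minimal sketch of RSD (the course names and capacities below are invented for illustration):

```python
import random

def random_serial_dictatorship(prefs, capacity, courses_each, seed=0):
    """prefs: {student: courses ranked best-first}; capacity: {course: seats}.
    Each student, in a random priority order, takes their most-preferred
    courses that still have open seats, up to courses_each."""
    rng = random.Random(seed)
    order = list(prefs)
    rng.shuffle(order)                    # the random priority order
    seats = dict(capacity)
    schedule = {}
    for student in order:                 # each dictator in turn...
        picks = []
        for course in prefs[student]:     # ...takes the best open courses
            if len(picks) == courses_each:
                break
            if seats[course] > 0:
                seats[course] -= 1
                picks.append(course)
        schedule[student] = picks
    return schedule

schedule = random_serial_dictatorship(
    {"ann": ["micro", "metrics"], "bo": ["micro", "macro"], "cy": ["metrics", "macro"]},
    {"micro": 1, "metrics": 1, "macro": 2},
    courses_each=1,
)
print(schedule)  # whoever draws first priority gets her top choice
```

Truth-telling is dominant because reported rankings only determine what you grab when your turn comes, and the outcome is ex post efficient because later students can never hold a seat an earlier student preferred.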
In practice, at HBS, students choose one course at a time. This cannot satisfy both ex post efficiency and strategyproofness, and students must lie in equilibrium. How do they lie and what welfare implications does this have relative to RSD? Budish and Cantillon are able to answer this question because they have two sets of data: (1) data on how students actually played the mechanism and (2) data on the students’ true preferences from a survey. The second set of data is rarely available.
First, by comparing actual play with reported preference they can see how students game the system. Second, by using the reported preferences in the survey they can simulate what would have happened in the RSD. They can then compare the RSD outcome with the outcome in the mechanism used and make judgements about welfare.
Their results: Students gamed the system to grab slots in popular courses. For example, even if a popular course ranked low in a student’s own preferences, he signed up for it at the earliest opportunity, fearing it would be gone if he waited. This causes congestion: popular courses fill up quickly and people who value them highly may not get in. Moreover, by the time the student bids on a less popular course, it may be full even though it is high in his own ranking. So there are two inefficiencies. But the HBS mechanism can still dominate RSD in terms of ex ante welfare. Under RSD, if a student is picked late and values some course highly, it may already be gone by the time he gets to pick. The HBS mechanism at least gives him some chance to get some of his picks in early. Despite the gaming, this can increase welfare ex ante.
This is a cool paper. Some predictions of theory are borne out, which is always nice. More importantly, the welfare comparisons using real data make the paper quite original. Perhaps there is some room to tinker with the HBS mechanism and this might lead to other insights.
As someone who knows more about Bayesian mechanism design than the strategyproofness literature, I am still trying to grope my way towards some unification of these approaches. It seems one should start off with some welfare criterion and some solution concept and maximize welfare subject to incentive constraints, which depend on the solution concept. If the solution concept is dominant strategy equilibrium, then this leads to dominant strategy incentive constraints. This may not lead to ex post efficient allocations (see Jeff’s timely post about Myerson-Satterthwaite!) but it can improve ex ante welfare. There is a large literature on Bayesian mechanism design; there is a large literature on revenue equivalence. But even in the latter case, the mechanism designer has a belief. I guess this implies there is some room for dominant strategy mechanism design without a prior, which I suppose is robust mechanism design. Has this approach been applied to market design? My impression is that it has not…
The seminal (economist’s!) answer to this question has been offered by my old teacher in grad school and my colleague till a few years ago, Kathy Spier, in her paper “Incomplete Contracts and Signaling”. As her title suggests, her core idea is based on signaling: an informed party making an offer in a game signals his private information via the offer. An offer that carries a negative inference may not be made. Kathy’s model is quite complex but its central logic is captured in a passage from her paper:
A fellow might hesitate to ask his fiancée to sign a prenuptial agreement…. because to do so would lead her to believe that the quality of the marriage – or the probability of divorce – are higher than she had thought.
In the new century, roles are reversed – the wealthy partner might be female and the poor one male. If there is no prenup, the man can extract a large fraction of his ex-wife’s wealth after a divorce. In that situation, to signal his love, the man should offer to sign a prenup that gives him none of his ex-wife’s fortune. If he is confident the marriage will survive, divorce is impossible anyway, so why worry about income in an impossible event?
Alas, as the poets have long told us, the path of true love does not run smooth – the most well-intentioned and loving couple can find their marriage has hit the rocks. Then, there will be much regret and perhaps desperate, legal action to extract enough cash to live in the style to which one has become accustomed.
And so I turn finally to this sad case in the British courts:
When Katrin Radmacher and Nicolas Granatino married in 1998, she insisted it had been for love, not for money. That was why the wealthy German heiress had ensured that her banker husband signed a prenuptial agreement promising to make no claims on her fortune if the marriage failed. It was, she said, “a way of proving you are marrying only for love”.
Once the love had gone, however – the couple separated in 2006 – the fortune remained, and Granatino, by then a mature student at Oxford, decided to challenge the prenup, which they had signed in Germany before marrying and divorcing in Britain, arguing it had no status in English law.
But Granatino lost.
I’m sure a research paper can come out of this: two-sided incomplete information, two-sided signaling and optimal contracting… I’m too busy keeping my marriage alive to have the time to write it.
The Justice Department is suing Blue Cross Blue Shield of Michigan for its use of “most favored nation” clauses in contracts with hospitals:
Blue Cross and Blue Shield, like most insurers, contracts with hospitals, doctors, labs and other providers for services. The lawsuit took direct aim at contract clauses stipulating that no insurance companies could obtain better rates from the providers than Blue Cross. Some of these contract provisions, known as “most favored nation” clauses, require hospitals to charge other insurers a specified percentage more than they charge Blue Cross — in some cases, 30 to 40 percent more, the lawsuit said.
This kind of contract has several effects. Most obviously, it deters entry by other health insurance companies, which automatically face higher costs than BCBS Michigan because of the most favored nation clause. BCBS then has more monopoly power and can charge higher prices for its products. Second, a competitor is going to have a hard time negotiating a low price with a hospital, as any low price it negotiates also has to be passed on to BCBS to maintain the stipulated difference. The hospital is in effect giving a double discount and is unlikely to accept such a large cut in profits on a large BCBS contract just to win incremental business from a small health insurance company. Third, BCBS has less incentive to negotiate lower prices for itself: its competitors’ prices are automatically higher anyway. BCBS is even willing to pay higher prices to hospitals to get the most favored nation clause put in:
The lawsuit also asserts that Blue Cross, in effect, bought protection from competition — by agreeing to pay higher prices to certain hospitals to induce them to agree to the “most favored nation” clauses.
Lots of interesting stuff. I’m looking forward to following this case to see what happens.
Our water pipes are old in Evanston and need replacing. What does that mean? More taxes, of course!
But the Water Division of the City of Evanston is managed by sharp, dedicated operators who know a thing or two about running a business.
First, they understand good PR. They are running tours of the water treatment facility just north of the NU campus on Lincoln and Sheridan. Our kids wanted to go of course. I thought the whole group traipsing through would be made up of parents and kids. But no! We had retirees asking very technical questions, interested citizens wondering about water purity and funny smells caused by dead algae and a middle-aged couple out on a date. We were all charmed by the head of Evanston Water and his entourage and quite convinced by his arguments for higher taxes.
But he also has another plan. Many of the northern suburbs get their water either from Chicago or Evanston and are tied into long term contracts. As the contracts expire, Evanston and Chicago will compete for business and Evanston is at an advantage. Our cost of production is slightly higher – Chicago can generate greater economies of scale. But Chicago has more large consumers. If it gives one of them a discount, say Morton Grove, then others will clamour for the same deal. Evanston does not face this issue to the same extent. We can safely undercut Chicago’s price to Morton Grove without passing on the same discount to other towns we supply. We have a good old competitive advantage. Then, we can use the profits to upgrade our water system without raising taxes so much.
Another interesting tidbit: Even if we sell water to Skokie at 60c per unit, they charge much more to their citizens. They add a Skokie margin onto the margin Evanston is already charging them: so-called double marginalization. Demand for water is inelastic, so double marginalization does not have much effect. But if demand were elastic, then prices would be lower and consumer welfare and even profits would be higher if the Skokie and Evanston utilities merged and set prices like an integrated monopolist. (This is ignoring the fact that prices are regulated in some way; I do not have the details of exactly how.) Perhaps there are other examples where one municipality sells a public service to another and double marginalization matters more.
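The double marginalization point can be illustrated with a textbook linear-demand example (the numbers are purely illustrative, nothing to do with actual Evanston or Skokie prices):

```python
def linear_demand_outcomes(a, b, c):
    """Compare an integrated monopolist with a chain of two monopolists
    (double marginalization) under linear demand p = a - b*q and constant
    marginal cost c. Textbook illustration, not real utility data.
    """
    # Integrated monopolist: maximize (p - c) * q
    q_int = (a - c) / (2 * b)
    p_int = a - b * q_int
    profit_int = (p_int - c) * q_int

    # Chain: upstream picks wholesale price w, downstream then marks up.
    # Downstream best response is p = (a + w) / 2, so the upstream
    # optimum is w = (a + c) / 2, stacking a second margin on top.
    w = (a + c) / 2
    p_chain = (a + w) / 2
    q_chain = (a - p_chain) / b
    profit_chain = (w - c) * q_chain + (p_chain - w) * q_chain

    return p_int, profit_int, p_chain, profit_chain
```

With a = 100, b = 1, c = 20, the integrated monopolist charges 60 and earns 1600, while the chain charges 80 and earns only 1200 in combined profit: merging lowers the price and raises total profit, just as the post describes.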
The prize has been awarded to Peter Diamond, Dale Mortensen and Chris Pissarides.
First, let me bask in some Nobel glory and say “I called it!”: in a post last week, I used the Kellogg/NU data to predict this prize. This goes to show that “information aggregation and voting” has one data point in its support.
Dale Mortensen is at Northwestern so I know him a little. I remember having a very fun conversation with Dale and his wife and Ed Prescott (before he won the Nobel Prize himself) at a Schwartz dinner at Northwestern. There was a quite lively discussion of the Iraq war led by Dale’s wife. I’ve never met Chris Pissarides. All I can say as someone raised in the U.K. is that it’s great that a professor at L.S.E. won the Prize. L.S.E. is an amazing intellectual, cosmopolitan institution. Hayek and Coase spent formative years there. It’s wonderful that Pissarides got his PhD at LSE and has spent almost his entire career there. I visited MIT last year but I was too intimidated by Diamond to strike up a conversation!
Here is my attempt to explain a few of their papers. My choices are somewhat idiosyncratic: they are the papers I have read rather than necessarily their key papers.
Peter Diamond has a classic paper, A Model of Price Adjustment, in the Journal of Economic Theory in 1971. Diamond shows that even an infinitesimal search cost can lead to monopoly pricing rather than competitive pricing, because of a hold-up problem. Suppose there is no search cost and two firms are selling an identical good. The logic of (Bertrand) competition means they will both end up pricing at cost: at any higher price, one firm can undercut the other and capture the entire demand rather than half of it, nearly doubling its profit.
Instead, suppose there is a small search cost e > 0 that a consumer must pay to discover a price. Pricing at cost is no longer an equilibrium – one firm can raise its price by almost e. The consumer discovers the higher price once he enters the store, but going to the other store to get the lower price involves a transaction cost of e anyway. So it is better to submit to the hold-up and pay the higher price. This logic obtains at all prices below the monopoly price; at that point the firm does not want to raise the price any more, as consumers stop buying at a rate that makes further price increases unprofitable. So a small search cost completely reverses the intuition about pricing.
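The unraveling argument can be traced numerically (a stylized sketch of the logic, not Diamond's actual model; the cost and price parameters are made up):

```python
def unravel_to_monopoly(cost, search_cost, monopoly_price):
    """Stylized illustration of the Diamond (1971) unraveling argument.

    Starting from marginal-cost pricing, a firm can always raise its
    posted price by (up to) the search cost e without losing customers,
    since visiting the rival costs e anyway. Iterating these profitable
    deviations pushes the common price up to the monopoly level, where
    further increases stop being profitable.
    """
    price = cost
    steps = 0
    while price + search_cost < monopoly_price:
        price += search_cost   # profitable deviation: raise price by e
        steps += 1
    # the last deviation is capped at the monopoly price
    return min(price + search_cost, monopoly_price), steps
```

Starting from a cost of 10 with a search cost of 0.5 and a monopoly price of 20, nineteen rounds of deviations take the price from cost all the way to 20: no price below the monopoly price survives, however small e is.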
Diamond has made seminal contributions to many areas. He has worked on general equilibrium with incomplete markets, the overlapping-generations model and on public finance (Diamond-Mirrlees).
Of Dale Mortensen’s papers, I know Property Rights and Efficiency in Mating, Racing and Related Games in the American Economic Review in 1982. Suppose parties are trading and have to invest ex ante to increase the ex post value of trade. The investment could be search for a trading partner, R&D investment, etc. If they do not trade, each goes back into the search market to trade with someone else. If they do trade, any surplus they generate is split 50-50. The latter property acts as a kind of tax on ex ante investment and generates underinvestment. In common with Diamond, there is not only search but also ex post hold-up. In Diamond, the firm can raise the price behind the veil of secrecy; in Mortensen, ex post haggling over the price generates the hold-up. The Mortensen model in the AER is closely related to the Grossman-Hart model of incomplete contracts and property-rights allocation, and also to last year’s prize to Oliver Williamson.
Pissarides’s AER 1985 paper Short-Run Equilibrium Dynamics of Unemployment, Vacancies, and Real Wages also assumes unemployed workers and firms bargain over wages ex post. Their shares of the surplus depend on their outside options, which in turn depend on the tightness of the labor market – intuitively, the more workers are unemployed, the lower the wage firms need to offer. In fact, in the simple model Pissarides proposes, it is possible to derive explicit solutions relating unemployment to wages and vacancies and even take the model to data. Hence, the Diamond-Mortensen-Pissarides model has become a canonical model with which to study unemployment. It has been extended in many directions by many authors, including Mortensen and Pissarides together.
I guess the DMP model is being used to study unemployment dynamics in the current recession and to propose policy responses. A highly timely prize.
1. John Lennon at Madison Square Garden in 1972 (requires free membership).
2. Randall Grahm wine maker at Bonny Doon has an award-winning blog.
3. Eating and drinking in South Tyrol.
Here are votes from some interested subset of NU Econ and Kellogg. NU and Kellogg Management and Strategy have lots of I.O. specialists and Dale Mortensen is in the Economics Department. Plus we have lots of theorists. The closest I have to a coherent story from this data is a prize for “search theory” with Diamond, Mortensen and Pissarides.
If a tree falls in a forest and no one is around to hear it, does it make a sound?
This old philosophical conundrum can be mapped into the dilemma facing the aging academic:
If I publish a paper and nobody reads it, teaches it or cites it, can it ever be a truly great paper?
As with all questions with no Platonic certitude, economists say: Let the market speak and tell us the answer.
Glenn Ellison has studied a more serious version of my question in his paper “How Does the Market Use Citation Data? The Hirsch Index in Economics.” The Hirsch index for an author is the highest number h such that the author has h papers with at least h citations. So, an index of 5 means you have five papers with at least five citations and that you do not have six papers with at least six citations etc.
Glenn points out that the Hirsch index doesn’t do a great job at ranking economists. Nobel prize winner Roger Myerson’s Hirsch index is a mere 32. But he has a few papers with over a thousand citations. Seminal papers in economics tend to get a huge number of citations but most only get a few. So, the plain vanilla Hirsch index needs to be re-evaluated.
Glenn turns to the market to guide his measure. He studies an index of the form: h is the highest number such that the author has at least h papers with at least a·h^b citations each. The plain vanilla Hirsch index sets a = b = 1. Glenn estimates a and b in various ways. In one method, he looks at the NRC department rankings and finds the values of a and b that best predict the NRC rank of a (young) economist’s department. To cut a long story short, a = 5 and b = 2 come out as the best predictors. With this estimation in hand, we can perform various comparisons – Which fields are highly cited? Which economists are highly cited? Etc.
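The generalized index is easy to compute from a citation list (the citation counts below are made up for illustration):

```python
def generalized_hirsch(citations, a=1.0, b=1.0):
    """Largest h such that the author has at least h papers with at least
    a * h**b citations each. a = b = 1 gives the plain Hirsch index;
    Ellison's preferred estimates are roughly a = 5, b = 2.
    """
    cites = sorted(citations, reverse=True)
    h = 0
    # h can grow to h+1 iff the (h+1)-th best paper has a*(h+1)**b cites
    while h < len(cites) and cites[h] >= a * (h + 1) ** b:
        h += 1
    return h
```

For example, an author with citation counts [1000, 400, 300, 100, 50, 10, 5, 5, 2, 1] has a plain Hirsch index of 6 but a generalized index (a = 5, b = 2) of only 4: the generalized version discounts the long tail of lightly cited papers, which is exactly what the Myerson example calls for.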
Here are some tasty morsels of information. International finance, trade and behavioral economics are highly cited fields (Table 6). Micro theory and cross-sectional econometrics are the worst, and IO does not do too well either. These facts mean Yale and NU, which are strong in these three areas, are under-cited economics departments. But basically one gets the picture that an economist’s citations are closely connected to the rank of the university where s/he is employed.
Ranking young economists, it is pretty obvious who is going to come out on top: Daron Acemoglu with an index of 7.84 (Table 7). This means Daron has 7.84 papers with roughly 300 citations. Ed Glaeser and Chad Jones are close behind. Once you adjust by field, more theorists start to rank highly: Glenn, Ilya Segal, Stephen Morris and Susan Athey pop up. Also, my friend Aviv Nevo gets a shout out as an underplaced guy.
A few comments:
Most of these people were tenured well before their citations took off, so expert opinion, not data-mining, led to their tenure. This tells you something about how well expert opinion predicts citations. And to the extent that citations take time to accumulate, expert opinion will always play a role in tenure decisions. There is a difference between external opinion and internal opinion, though. The same few people always get asked to write letters and they will do a good job, but internal opinions may be noisier and depend on the quality of the department. Then Glenn’s field-adjusted citation measure gives some idea of a candidate’s quality and might be a valuable input into the tenure decision.
Finally, there are citations and citations. A paper getting regular cites in top journals is better than a paper getting cites in lower tier journals. This can be dealt with by improving the citation index.
At another extreme, some papers may be journalistic, not academic, and then their citations mean less. For example, Malcolm Gladwell gets high citations for The Tipping Point but he did not do any of the original scientific research on which his book is based. Of course he writes wonderfully, comes up with amazing examples and is clearly an intellectual. I bet Harvard would love to have him as an adjunct professor, but they will not give him a tenured professorship.
Despite these caveats, the generalized Hirsch index is an interesting input for academic decision-making.
Ray Fisman at Slate describes a new paper by David Card, Alexandre Mas, Enrico Moretti and Emmanuel Saez. These researchers use publicly available salary data for the University of California to study whether workers are disgruntled when they learn they are earning less than their colleagues. The Sacramento Bee has a website which allows anyone to search for salaries by name or institution. The researchers told some employees about the website so they could search for information about their colleagues’ salaries. They then asked all employees about their job satisfaction. Comparing the groups gives an estimate of the impact of knowing salaries on job satisfaction. Fisman reports:
On average, receiving SacBee information via e-mail had little effect on job satisfaction or job-search plans. But when the researchers divided the sample in half—those above the median pay level for comparable individuals in their department and those below—they found low earners were significantly more likely to report low job and wage satisfaction if they received the SacBee e-mail. The SacBee e-mail had an even greater impact on the likelihood of low wage earners responding that they would be looking for a job in the coming year. (One respondent even sent a note to the researchers letting them know that he handed in his resignation shortly after checking his colleagues’ salaries on the SacBee Web site.) Surprisingly, high earners didn’t revel in their relative superiority—exposure to the SacBee Web site had no effect on their job satisfaction or likelihood of looking for a new job. (The researchers also found that both low and high earners expressed greater concern for income inequality in America after poking around the SacBee’s salary database.)
I’ve dressed up this post to look intellectual but really I know and you know that we want the juicy stuff:
1. Fisman’s link to the SacBee website is here.
2. What about other universities? Dan Hamermesh’s Gossip Files provide some more information….
A California classic. Totally delicious. Three distinct phases on the palate. Fruit at first: strawberry, but also cherry, like a Burgundy. That fades, replaced by acid, and then finally a long, spicy finish. Lots of clove and pepper.
Wish it was a bit lower in alcohol – I had a hard time waking up this morning!
I believe this wine was one of the original Rhone Rangers – California’s attempt to replicate Rhone-style wines. It’s way cheaper ($30 approx.) than a good, reliable CdP, and there is considerable complexity and none of the fruity, oaky quality of the usual California-style wines.
Watch the ad and then go to 1.10 minutes for the Simpsons’ prediction….