An important role of government is to provide public goods that cannot be provided via private markets. There are many ways to express this view theoretically; a famous one from modern theory is due to Mailath and Postlewaite. (Here is a simple exposition.) They consider a public good that potentially benefits many individuals and can be provided at a fixed per-capita cost C. (So this is a public good whose cost scales proportionally with the size of the population.)
Whatever institution is supposed to supply this public good faces the problem of determining whether the sum of all individuals' values exceeds the cost. But how do you find out individuals' values? Without government intervention the best you can do is ask them to put their money where their mouths are. But this turns out to be hopelessly inefficient. For example, if everybody is expected to pay (at least) an equal share of the cost, then the good will be produced only if every single individual has a willingness to pay of at least C. The probability that happens shrinks to zero exponentially fast as the population grows. And in fact you can't do much better than have everyone pay an equal share.
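Here is a minimal sketch of that last point (my own illustration with made-up numbers, not anything from the Mailath-Postlewaite paper): if values are independent draws and everyone must be willing to cover an equal share C, the chance the good gets produced is the chance that every single draw clears C, which collapses exponentially in the population size.

```python
# Illustration only (hypothetical numbers): voluntary provision with equal
# cost shares succeeds only if *every* individual's value exceeds the
# per-capita cost C.  With independent uniform[0,1] values, that
# probability vanishes exponentially as the population grows.

C = 0.5                       # per-capita cost share
p_exceeds = 1 - C             # P(a uniform[0,1] value exceeds C)

for n in (1, 10, 20, 50, 100):
    p_produced = p_exceeds ** n   # every one of the n values must clear C
    print(f"population {n:3d}: P(good is produced) = {p_produced:.2e}")
```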
Government can help because it has the power to tax. We don’t have to rely on voluntary contributions to raise enough to cover the costs of the good. (In the language of mechanism design, the government can violate individual rationality.) But compulsory contributions don’t amount to a free lunch: if you are forced to pay you have no incentive to truthfully express your true value for the public good. So government provision of public goods helps with one problem but exacerbates another. For example if the policy is to tax everyone then nobody gives reliable information about their value and the best government can do is to compare the cost with the expected total value. This policy is better than nothing but it will often be inefficient since the actual values may be very different.
But government can use hybrid schemes too. For example, we could pick a representative group in the population and have them make voluntary contributions to the public good, signaling their value. Then, if enough of them have signaled a high willingness to pay, we produce the good and tax everyone else an equal share of the residual cost. This way we get some information revelation but not so much that the Mailath-Postlewaite conclusion kicks in.
Indeed it is possible to get very close to the ideal mechanism with an extreme version of this. You set aside a single individual and then ask everyone else to announce their value for the public good. If the total of these values exceeds the cost you produce the public good and then charge them their Vickrey-Clarke-Groves (VCG) tax. It is well known that these taxes provide incentives for truthful revelation but that the sum of these taxes will fall short of the cost of providing the public good. Here’s where government steps in. The singled-out agent will be forced to cover the budget shortfall.
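To make the scheme concrete, here is a toy sketch (my own implementation with hypothetical numbers, using the standard pivot version of the VCG tax rather than anything from a specific paper): everyone except one singled-out agent reports a value, the good is built if the reports cover the cost, each reporter pays a pivot tax, and the excluded agent is billed for whatever shortfall remains.

```python
# Toy sketch of the scheme described above (hypothetical numbers).
# Reporters pay the pivot (Clarke) tax: you pay only if your report was
# needed to push the total over the cost.  The excluded agent covers the
# budget shortfall.

def pivot_taxes(values, cost):
    """values: reported values of everyone except the excluded agent."""
    total = sum(values)
    build = total >= cost
    taxes = []
    for v in values:
        others = total - v
        # a reporter pays only if, without her, the others would not cover the cost
        taxes.append(max(0.0, cost - others) if build else 0.0)
    shortfall = cost - sum(taxes) if build else 0.0
    return build, taxes, shortfall

# Hypothetical reported values and cost
build, taxes, shortfall = pivot_taxes([30, 10, 45, 5], cost=80.0)
print("build the public good:", build)    # True, since reported values total 90 >= 80
print("pivot taxes:", taxes)              # [20.0, 0.0, 35.0, 0.0]
print("charged to the excluded agent:", shortfall)   # 80 - 55 = 25.0
```

Note how the taxes keep truth-telling a dominant strategy for the reporters while summing to less than the cost, which is exactly why someone has to be singled out to balance the budget.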
Now obviously this is bad policy and is probably infeasible anyway since the poor guy may not be able to pay that much. But the basic idea can be used in a perfectly acceptable way. The idea was that by taxing an agent we lose the ability to make use of information about his value so we want to minimize the efficiency loss associated with that. Ideally we would like to find an individual or group of individuals who are completely indifferent about the public good and tax them. Since they are indifferent we don’t need their information so we lose nothing by loading all of the tax burden on them.
In fact there is always such a group and it is a very large group: everybody who is not yet born. Since they have no information about the value of a public good provided today they are the ideal budget balancers. Today’s generation uses the efficient VCG mechanism to decide whether to produce the good and future generations are taxed to make up any budget imbalance.
There are obviously other considerations that come into play here and this is an extreme example contrived to make a point. But let me be explicit about the point. Balanced budget requirements force today’s generation to internalize all of the costs of their decisions. It is ingrained in our senses that this is the efficient way to structure incentives. For if we don’t internalize the externalities imposed on subsequent generations we will make inefficient decisions. While that is certainly true on many dimensions, it is not a universal truth. In particular public goods cannot be provided efficiently unless we offload some of the costs to the next generation.

I was walking along, and I saw just this hell of a big moose turd, I mean it was a real steamer! So I said to myself, “self, we’re going to make us some moose turd pie.” So I tipped that prairie pastry on its side, got my sh*t together, so to speak, and started rolling it down towards the cook car: flolump, flolump, flolump. I went in and made a big pie shell, and then I tipped that meadow muffin into it, laid strips of dough across it, and put a sprig of parsley on top. It was beautiful, poetry on a plate, and I served it up for dessert.
Here’s one of the thorniest incentive problems known to man. In an organization there is a job that has to be done. And not just anybody can do it well, you really need to find the guy who is best at it. The livelihood of the organization depends on it. But the job is no fun and everyone would like to get out of doing it. To make matters worse, performance is so subjective that no contract can be written to compensate the designee for a job well done.
The core conflict is exemplified in a story by Utah Phillips about railroad workers living out in the field as they work to level the track. Someone has to do the cooking for the team and nobody wants to do it. Lacking any better incentive scheme they went by the rule that if you complained about the food then from now on you were going to have to do the cooking.
You can see the problem with this arrangement. But is there any better system? You want to find the best cook but the only way to reward him is to relieve him of the job. That would be self-defeating even if you could get it to work. You probably couldn't, because who would be willing to say the food was good if it meant depriving themselves of it the next time?
A simple rotation scheme at least has the benefit of removing the perverse incentive. Then on those days when the best cook has the job we can trust that he will make a good meal out of his own self interest. He might even volunteer to be the cook.
But it might be optimal to rule out volunteering too. Because that could just bring back the original incentive problem in a new form. Since ex ante nobody knows who the best cook is, everyone will set out to prove that they are incapable of making a palatable meal so that the one guy who actually can cook, whoever he is, will volunteer.
It may help to keep the identity of the cook secret. Then when a capable cook actually has the job he can feel free to make a good meal without worrying that he will be recruited permanently. It will also lower the incentive for the others to make a bad meal because nobody will know who to exclude in the future.
Even if there is no scheme that really solves the incentive problem, the freedom to complain is essential for organizational morale.
Well, this big guy come into the mess car, I mean, he’s about 5 foot forty, and he sets himself down like a fool on a stool, picked up a fork and took a big bite of that moose turd pie. Well he threw down his fork and he let out a bellow, “My God, that’s moose turd pie!”
“It’s good though.”
Believe it or not that line of thinking does lie just below the surface in many recruiting discussions. The recruiting committee wants to hire good people but because the market moves quickly it has to make many simultaneous offers and runs the risk of having too many acceptances. There is very often a real feeling that it is safe to make offers to the top people who will come with low probability but that it's a real risk to make an offer to someone for whom the competition is not as strong and who is therefore likely to accept.
This is not about adverse selection or the winner’s curse. Slot-constraint considerations appear at the stage where it has already been decided which candidates we like and all that is left is to decide which ones we should offer. Anybody who has been involved in recruiting decisions has had to grapple with this conundrum.
But it really is a phantom issue. It’s just not possible to construct a plausible model under which your willingness to make an offer to a candidate is decreasing in the probability she will come. Take any model in which there is a (possibly increasing) marginal cost of filling a slot and candidates are identified by their marginal value and the probability they would accept an offer.
Consider any portfolio of offers which involves making an offer to candidate F. The value of that portfolio is a linear function of the probability that F accepts the offer. For example, consider making offers to two candidates F and G. The value of this portfolio is

p_F p_G [v_F + v_G - C(2)] + p_F (1 - p_G) [v_F - C(1)] + (1 - p_F) p_G [v_G - C(1)]

where p_F and p_G are the acceptance probabilities, v_F and v_G are the values, and C(n) is the cost of hiring n candidates in total. This can be re-arranged to

p_F [ v_F - (1 - p_G) C(1) - p_G MC ] + p_G [v_G - C(1)]

where MC = C(2) - C(1) is the marginal cost of a second hire. If the bracketed expression is positive then you want to include F in the portfolio, and the value of doing so only gets larger as p_F increases.

In particular, if F is in the optimal portfolio, then that remains true when you raise p_F.
That's not to say that there aren't interesting portfolio issues involved in this problem. One issue is that worse candidates can crowd out better ones. In the example, as the probability that F accepts an offer, p_F, increases you begin to drop others from the portfolio, possibly even others who are better than F.

For example, suppose that the department is slot-constrained and would incur the Dean's wrath if it hired two people this year. Even if you prefer candidate G, you will nevertheless make an offer only to F if p_F is very high.
In general, I guess that the optimal portfolio is a hard problem to solve. It reminds me of this paper by Hector Chade and Lones Smith. They study the problem of how many schools to apply to, but the analysis is related.
What is probably really going on when the titular quotation arises is that factions within the department disagree about the relative values of F and G. If F is a theorist and G a macro-economist, the macro-economists will foresee that a high p_F means no offer for G.

Another observation is that Deans should not use hard offer constraints but instead expose the department to the true marginal cost curve, understanding that the department will make these calculations and voluntarily ration offers on its own. (When p_F is not too high, it is optimal to make offers to both F and G, and a hard offer constraint prevents that.)
The Texas legislature is on the verge of passing a law permitting concealed weapons on University campuses, including the University of Texas where just this Fall my co-author Marcin Peski was holed up in his office waiting out a student who was roaming campus with an assault rifle.
This post won’t come to any conclusions, but I will try to lay out the arguments as I see them. More guns, less crime requires two assumptions. First, people will carry guns to protect themselves and second, gun-related crime will be reduced as a result.
There are two reasons that crime will be reduced: crime pays off less often, and sometimes it leads to shooting. In a perfect world, a gun-toting victim of a crime simply brandishes his gun and the criminal walks away or is apprehended and nobody gets hurt. In that perfect world the decision to carry a gun is simple. If there is any crime at all you should carry a gun because there are no costs and only benefits. And then the decision of criminals is simple too: crime doesn't pay because everyone is carrying a gun.
(In equilibrium we will have a tiny bit of crime, just enough to make sure everyone still has an incentive to carry their guns.)
But the world is not perfect like that and when a gun-carrying criminal picks on a gun-carrying victim, there is a chance that either of them will be shot. This changes the incentives. Now your decision to carry a gun is a trade-off between the chance of being shot versus the cost of being the victim of a crime. The people who will now choose to carry guns are those for whom the cost of being the victim of a crime outweighs the cost of an increased chance of getting shot.
If there are such people then there will be more guns. These additional guns will reduce crime because criminals don’t want to be shot either. In equilibrium there will be a marginal concealed-weapon carrier. He’s the guy who, given the level of crime, is just indifferent between being a victim of crime and having a chance of being shot. Everyone who is more willing to escape crime and/or more willing to face the risk of being shot will carry a gun. Everyone else will not.
In this equilibrium there are more guns and less crime. On the other hand there is no theoretical reason that this is a better outcome than no guns, more crime. Because this market has externalities: there will be more gun violence. Indeed the key endogenous variable is the probability of a shootout if you carry a gun and/or commit a crime. It must be high enough to deter crime.
And there may not be much effect on crime at all. Whose elasticity with respect to increased probability of being shot is larger, the victim or the criminal? Often the criminal has less to lose. To deter crime the probability of a shooting may have to increase by more than victims are willing to accept and they may choose not to carry guns.
There is also a free-rider problem. I would rather have you carry the gun than me. So deterrence is underprovided.
Finally, you might say that things are different for crimes like mugging versus crimes like random shootings. But really the qualitative effects are the same and the only potential difference is in terms of magnitudes. And it's not obvious which way it goes. Are random assailants more or less likely to be deterred? As for the victims, on the one hand they have more to gain from carrying a gun when they are potentially faced with a campus shooter, but if they plan to make use of their gun they also face a larger chance of getting shot.
NB: nobody shot at the guy at UT in September and the only person he shot was himself.
My daughter’s 4th grade class is reading a short story by O. Henry called The Two Thanksgiving Day Gentlemen. (A two minute read.) In about an hour I will go to her class and lead a discussion of the story. Here are my notes.
In the story we meet Stuffy Pete. He is sitting on a bench waiting for a second gentleman to arrive. We learn that this is an annual meeting on Thanksgiving day that Stuffy Pete is always looking forward to. Stuffy Pete is a ragged, hungry street-dweller and the gentleman who arrives each year treats him to a Thanksgiving feast.
But on this Thanksgiving, Stuffy Pete is stuffed. Because on his way to the meeting, he was stopped by the servant of two old ladies who had their own Thanksgiving tradition. They treated him to an even bigger feast than he is used to. And so he sits here, weighed down on the bench, terrified of the impending arrival.
The old gentleman arrives and recites this speech.
“Good morning, I am glad to see that the vicissitudes of another year have spared you to move in health about the beautiful world. For that blessing alone this day of thanksgiving is well celebrated. If you will come with me, my man, I will provide you with a dinner that should be more than satisfactory in every respect.”
The same speech he has recited every year the two gentlemen met on that same bench. “The words themselves almost formed an institution.”
And Stuffy Pete, in tearful agony at the prospects replies “Thankee sir. I’ll go with ye, and much obliged. I’m very hungry sir.”
Stuffy’s Thanksgiving appetite was not his own; it now belonged to this kindly old gentleman who had taken possession of it.
The story’s deep cynicism, hinted at in the preceding quote, is only fully realized in the final paragraphs which contain the typical O. Henry ironic twist. Stuffy, overstuffed by a second Thanksgiving feast collapses and is brought to hospital by an ambulance whose driver “cursed softly at his weight.” Shortly thereafter he is joined there by the old gentleman and a doctor is overheard chatting about his case
“That nice old gentleman over there, now” he said “you wouldn’t think that was a case of almost starvation. Proud old family, I guess. He told me he hadn’t eaten a thing for three days.”
Social norms and institutions re-direct self-interested motives. Social welfare maximization is then proxied for by individual-level incentives. But they can take on a life of their own, uncoupled from their origin. This is the folk public choice theory of O. Henry’s staggeringly cynical fable.
By asking a hand-picked team of 3 or 4 experts in the field (the “peers”), journals hope to accept the good stuff, filter out the rubbish, and improve the not-quite-good-enough papers.
…Overall, they found a reliability coefficient (r^2) of 0.23, or 0.34 under a different statistical model. This is pretty low, given that 0 is random chance, while a perfect correlation would be 1.0. Using another measure of IRR, Cohen’s kappa, they found a reliability of 0.17. That means that peer reviewers only agreed on 17% more manuscripts than they would by chance alone.
That's from Neuroskeptic, writing about an article that studies the peer-review process. I couldn't tell you what Cohen's kappa means but let's just take the results at face value: referees disagree a lot. Is that bad news for peer-review?
Suppose that you are thinking about whether to go to a movie and you have three friends who have already seen it. You must choose in advance one or two of them to ask for a recommendation. Then after hearing their recommendation you will decide whether to see the movie.
You might decide to ask just one friend. If you do it will certainly be the case that sometimes she says thumbs-up and sometimes she says thumbs-down. But let’s be clear why. I am not assuming that your friends are unpredictable in their opinions. Indeed you may know their tastes very well. What I am saying is rather that, if you decide to ask this friend for her opinion, it must be because you don’t know it already. That is, prior to asking you cannot predict whether or not she will recommend this particular movie. Otherwise, what is the point of asking?
Now you might ask two friends for their opinions. If you do, then it must be the case that the second friend will often disagree with the first friend. Again, I am not assuming that your friends are inherently opposed in their views of movies. They may very well have similar tastes. After all they are both your friends. But, you would not bother soliciting the second opinion if you knew in advance that it was very likely to agree or disagree with the first on this particular movie. Because if you knew that then all you would have to do is ask the first friend and use her answer to infer what the second opinion would have been.
If the two friends you consult are likely to agree one way or the other, you get more information by instead dropping one of them and bringing in your third friend, assuming he is less likely to agree.
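To put a number on that intuition, here is a small toy calculation (entirely my own numbers, not from the article): each friend reads the movie correctly with probability 0.7, the second friend simply echoes the first with probability rho, and you go only if the posterior that the movie is good is at least 2/3 (a bad movie costs twice what a good one is worth).

```python
from itertools import product

ACC = 0.7                            # each friend reads the movie right w.p. 0.7
PAYOFF = {"good": 1.0, "bad": -2.0}  # value of going to a good / bad movie

def expected_payoff(rho):
    """Expected payoff from consulting two friends whose opinions are
    correlated: with probability rho the second friend echoes the first,
    otherwise she forms an independent impression."""
    joint = {}                       # (report1, report2, state) -> probability
    for state, s1, s2 in product(("good", "bad"), ("up", "down"), ("up", "down")):
        p1 = ACC if (s1 == "up") == (state == "good") else 1 - ACC
        p2_indep = ACC if (s2 == "up") == (state == "good") else 1 - ACC
        p2 = rho * (1.0 if s2 == s1 else 0.0) + (1 - rho) * p2_indep
        joint[(s1, s2, state)] = 0.5 * p1 * p2      # prior on "good" is 1/2
    ev = 0.0
    for s1, s2 in product(("up", "down"), repeat=2):
        # go to the movie only if doing so has positive expected value
        go_value = (joint[(s1, s2, "good")] * PAYOFF["good"]
                    + joint[(s1, s2, "bad")] * PAYOFF["bad"])
        ev += max(go_value, 0.0)
    return ev

for rho in (0.0, 0.5, 1.0):
    print(f"correlation {rho}: expected payoff {expected_payoff(rho):.3f}")
```

With these numbers the value of the pair of opinions falls steadily as the correlation rises; at rho = 1 the second friend adds nothing at all.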
This is all to say that disagreement is not evidence that peer-review is broken. Exactly the opposite: it is a sign that editors are doing a good job picking referees and thereby making the best use of the peer-review process.
It would be very interesting to formalize this model, derive some testable implications, and bring it to data. Good data are surely easily accessible.
(Picture: Right Sizing from www.f1me.net)
In economic theory, the study of institutions falls under the general heading of mechanism design. An institution is modeled as game in which the relevant parties interact and influence the final outcome. We study how to optimally design institutions by considering how changes in the rules of the game change the way participants interact and bring about better or worse outcomes.
But when the new leaders in Egypt sit down to design a new constitution for the country, standard mechanism design will not be much help. That's because all of mechanism design theory is premised on the assumption that the planner has in front of him a set of feasible alternatives and he is designing the game in order to improve society's decision over those alternatives. So it is perfectly well suited for decisions about how much a government should spend this year on all of the projects before it. But to design a constitution is to decide on procedures that will govern decisions over alternatives that become available only in the future, and about which today's Constitutional Congress knows nothing.
The American Constitutional Congress implicitly decided how much the United States would invest in nuclear weapons before any of them had any idea that such a thing was possible.
Designing a constitution raises a unique set of incentive problems. A great analogy is deciding on a restaurant with a group of friends. Before you start deliberating you need to know what the options are. Each of you knows about some subset of the restaurants in town and whatever procedure the group will use to ultimately decide affects whether or not you are willing to mention some of the restaurants you know about.
Ideally you would like a procedure which encourages everyone to name all the good restaurants they know about so that the group has as wide a set of choices as possible. But you can’t just indiscriminately reward people for bringing alternatives to the table because that would only lead to a long list of mostly lousy choices.
You can only expect people to suggest good restaurants if they believe that the restaurants they suggest have a chance of being chosen. And now you have to worry about strategic behavior. If I know a good Chinese restaurant but I am not in the mood for Chinese, then how are you going to reward me for bringing it up as an option?
When we think about institutions for public decisions, we have to take into account how they impact this strategic problem. Democracy may not be the best way to decide on a restaurant. If the status quo, say the Japanese restaurant, is your second-favorite, you may not suggest the Mexican restaurant for fear that it will split the vote and ultimately lead to the Moroccan restaurant, your least favorite.
Certainly such political incentives affect modern day decision-making. Would a better health-care proposal have materialized were it not for fear of what it would be turned into by the political sausage mill?
Actions speak louder than words. Anarchists seeking to spread revolution resort to extreme acts hoping to stir the sympathy of the general population. Would-be change-agents differ in their favored instrument of provocation – assassination, bombings or general strike. They are united by their intrinsic lack of real power. The only way they can hope to achieve their ends is by persuading other players to react and indirectly give them what they want. As such, the "propaganda of the deed" is typically practiced by people on the fringe of society, not in the corridors of power. (See my paper The Strategy of Manipulating Conflict with Tomas Sjöström for illustrations of this strategy.)
But Mubarak has reached this lowly state even as President of Egypt. He has conspicuously lost popular support and tensions long suppressed have burst into the open for all to see. He has lost the support of "the people" and, perhaps even more importantly, the army. What can he do to get it back? The anti-Mubarak protestors have till recently refrained from looting and mob mentality has been notable for its absence. As long as that remains the case, the army and the people are siding with the anti-Mubarak protestors or largely staying out of the fray. Mubarak's only hope is to get the people and the army to pick his side. He needs to energize the mob and trigger looting. That is his strategy. Police disappeared from the streets of Cairo a few days ago, inviting looters to run amok. That did not work. So, now he has employed pro-Mubarak "supporters" to fight anti-Mubarak protestors. Open fighting on the streets of Cairo, prodding the army to step in. The people, scared of the outbreak of lawlessness, turning to the strongman Mubarak to return some semblance of stability to the city and the country. This is where we are in the last couple of days. Another obvious strategy for Mubarak: Get his supporters to loot and pin it on the anti-Mubarak protestors. Not sure if that is happening yet.
What can be done to subvert the Mubarak strategy? For the protestors, the advice is obvious – no looting, no breakdown of law and order. The primary audience is the army and people – keep them on your side. For the Obama administration there is little leverage over Mubarak. I assume he has hidden away millions if not billions – cutting off future aid has little chance of persuading Mubarak to do anything. Again, the army is the primary audience for the Obama administration. Whichever side they pick will win. The army cares more about the cutoff of future aid than Mubarak. They have trained in US military schools and have connections here. The only leverage the Obama administration has is over the army and it is hard to tell how strong that leverage is.
Self-Deception is a fascinating phenomenon. If you repeat a lie to yourself again and again, you start to believe it. You would think that the ability to deceive yourself would be constrained by data. If there is obviously available evidence that your story is false, you might stop believing it. On that view, self-deception can only flourish when there is an identification problem: once the data falsify competing theories, the individual is forced to face facts.
Reality is much more complex. Take the perhaps extreme case of John Edwards. The National Enquirer published a story reporting that Rielle Hunter was pregnant with John Edwards’s child. Edwards simply denied the facts. The Enquirer employed a psychologist to profile Edwards. S/he concluded:
“Edwards looks at himself as above the law. He has a compromised conscience — meaning he will cover up his immoral behavior at whatever cost to keep his reputation intact. He believes he is who his reputation says he is, rather than the immoral side, the truth. He separates himself from the immoral side because that person wouldn’t be the next president of the United States. He overcompensated for his insecurities with sex to feed his ego which feeds his narcissism.”
The most important part was the absolute certainty of the mental health professional that Edwards would continue to deny the scandal — almost at all costs.
“He will keep denying the scandal to America because he is denying the reality of it to himself. He sees himself only as the image he has created.”
How do you deal with a pathological deceiver/self-deceiver? The Editor collected photos and evidence of Hunter-Edwards liasons. He describes his strategy:
We told the press that there were photographs and video from that night. Other journalists asked us to release the images but I refused. Edwards needed to imagine the worst-case scenario becoming public. The Enquirer would give him no clues about what it did and did not have…..
Behind the scenes we exerted pressure on Edwards, sending word though mutual contacts that we had photographed him throughout the night. We provided a few details about his movements to prove this was no bluff.
For 18 days we played this game, and as the standoff continued the Enquirer published a photograph of Edwards with the baby inside a room at the Beverly Hilton hotel.
Journalists asked if we had a hidden camera in the room. We never said yes or no. (We still haven’t). We sent word to Edwards privately that there were more photos.
He cracked. Not knowing what else the Enquirer possessed and faced with his world crumbling, Edwards, as the profiler predicted, came forward to partially confess. He knew no one could prove paternity so he admitted the affair but denied being the father of Hunter’s baby, once again taking control of the situation.
This strategy is inconsistent with the logic of extreme self-deception. Such an individual must be overconfident, thinking he can get away with bald-faced lies. Facing ambiguous evidence, he might conclude that the Enquirer had nothing beyond the odd photo it released. The Enquirer strategy instead relies on the individual believing the worst not the best. The two pathologies, self-deception and extreme pessimism, should cancel out….. there is some interesting inconsistency here.
One thing is clear: One way to eliminate self-deception is for a third-party to step in and make the decision. This is what Omar Suleiman, Barack Obama and the Egyptian army are doing to help Hosni Mubarak deal with his self-deception.

When your doctor points to the chart and asks you to rate your pain from 0 to 5, does your answer mean anything? In a way, yes: the more pain you are in the higher number you will report. So if last week you were 2 and this week you are 3 then she knows you are in more pain this week than last.
But she also wants to know your absolute level of pain and for that purpose the usefulness of the numerical scale is far less clear. It's unlikely that your 3 is equal in terms of painfulness to the next guy's 3. And words wouldn't seem to do much better. Language is just too high-level and abstract to communicate the intensity of experience.
But communication is possible. If you have driven a nail through your finger and you want to convey to someone how much pain you are in that is quite simple. All you need is a hammer and a second nail. The “speaker” can recreate the precise sensation within the listener.
Actual mutilation can be avoided if the listener has a memory of such an experience and somehow the speaker can tap into that memory. But not like this: “You remember how painful that was?” “Oh yes, that was a 4.” Instead, like this: “You remember what that felt like?” “OUCH!”
Memories of pain are more than descriptions of events. Recalling them relives the experience. And when someone who cares about you needs to know how much help you need, actually feeling how you feel is more informative than hearing a description of how you feel.
So words are at best unnecessary for that kind of communication, at worst they get in the way. All we need is some signal and some understanding of how that signal should map to a physical reaction in the “listener.” If sending that signal is a hard-wired response it’s less manipulable than speech.
Which is not to say that manipulation of empathy is altogether undesirable. Most of what entertains us exists precisely because our empathy-receptors are so easily manipulated.
Malcolm Gladwell is cynical about the ability of social media to facilitate activism:
The platforms of social media are built around weak ties. Twitter is a way of following (or being followed by) people you may never have met. Facebook is a tool for efficiently managing your acquaintances, for keeping up with the people you would not otherwise be able to stay in touch with. That's why you can have a thousand "friends" on Facebook, as you never could in real life.
If Twitter is only identifying people with weak preferences for activism, the "revolution will not be tweeted". But there is a second countervailing effect created by network externalities, studied in Gladwell's book The Tipping Point. An individual's cost of participating in a revolution is a function of how many other people are involved. For example, the probability that an individual gets arrested is smaller the larger the number of people surrounding him in a demonstration. Even if Twitter in the first instance does not increase the number of people participating in a demonstration, it does create common knowledge about where they are meeting and when. The marginal participant who would stay home in the absence of common knowledge strictly prefers to participate once Twitter creates it. Now more individuals will join as the demonstration gets a bit bigger, and so on: the twitting point is reached and we have a bigger chance of revolution (a stylized cascade sketch is below). Now, let me go to Jeff's twitter feed and see what he is plotting in his takeover of the NU Econ Dept.
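The cascade logic can be illustrated with a Granovetter-style threshold model (my own stylized numbers, not anything from Gladwell's book): each potential demonstrator shows up only if enough others are already there, and a single coordinated participant can tip the final crowd from nothing to everyone.

```python
# Stylized threshold-cascade sketch (hypothetical numbers): person i joins
# the demonstration only once at least thresholds[i] others are already
# expected to show up.  Common knowledge of even a tiny initial turnout
# can tip the process.

def final_turnout(thresholds, initial):
    crowd = initial
    while True:
        new_crowd = initial + sum(1 for t in thresholds if t <= crowd)
        if new_crowd == crowd:
            return crowd
        crowd = new_crowd

thresholds = list(range(1, 101))                # 100 potential demonstrators
print(final_turnout(thresholds, initial=0))     # no coordination: 0 show up
print(final_turnout(thresholds, initial=1))     # one tweeted participant tips it: 101 show up
```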
After winning her Australian Open semi-final match against Caroline Wozniacki, Li Na was interviewed on the court. She got some laughs when she complained that she was not feeling her best because her husband’s snoring had been keeping her up the night before. Then she was asked about her motivation.
Interviewer: What got you through that third set despite not sleeping well last night?
Li Na: Prize money.
There is pressure for filibuster reform in the Senate. Passing the threshold of sixty votes to even hold a vote was hard in the last couple of years when the Democrats had a large majority. It's going to be near impossible now that their ranks are smaller. Changing the rules has a short run benefit – easier to get stuff passed – but a long run cost – the Republicans will use the same rules to pass their legislation when Sarah Palin is President. Taking the long view, the Democrats decided not to go this route.
By the same token, the kind of delaying tactics that did not work in the lame duck session are an efficiency loss – they had little real effect on legislation but delayed the Senators taking the kind of long holidays they are used to. Some movement on delaying tactics is mutually beneficial. And so according to the NYT:
“Mr. Reid pledged that he would exercise restraint in using his power to block Republicans from trying to offer amendments on the floor, in exchange for a Republican promise to not try to erect procedural hurdles to bringing bills to the floor.
And in exchange for the Democratic leaders agreeing not to curtail filibusters by means of a simple majority vote, as some Democratic Senators had wanted to do, Senator Mitch McConnell of Kentucky, the Republican leader, said he would refrain from trying that same tactic in two years, should the Republicans gain control of the Senate in the next election.”

At this stage of the Chicago Mayoral election we have the following candidates: Rahm Emmanuel and a bunch of people who entered a race they had almost no chance of winning and so presumably were motivated by something other than being the Mayor of Chicago. It is past the stage when new candidates can enter the race. Perhaps you dread having Rahm Emmanuel as Mayor, but at this point wouldn't removing him be even worse? Just sayin'.
I am teaching a new PhD course this year called "Conflict and Cooperation". The title is broad enough to include almost anything I want to teach. This is an advantage – total freedom! – but also a problem – what should I teach? The course is meant to be about environments with weak property rights, where one player can achieve surplus by stealing it rather than creating it. To give the lectures some structure, I have adopted Hobbes's theory of conflict. Hobbes says the three sources of conflict are greed, fear and honour. The solution is to have a government or Leviathan which enforces property rights.
Perhaps reputation models à la Kreps-Milgrom-Roberts-Wilson come closest to offering a game theoretic analysis of honour (e.g. altruism in the finitely repeated prisoner’s dilemma). But I will only do these if I get the time as this material is taught in many courses. So, I decided to begin with greed.
I started with the classic guns vs butter dilemma: why produce butter when you can produce guns and steal someone else’s butter? This incentive leads to two kinds of inefficiency: (1) guns are not directly productive and (2) surplus is destroyed in war waged with guns. The second inefficiency might be eliminated via transfers (the Coase Theorem in this setting). This still leaves the first inefficiency which is similar to the underinvestment result in hold-up models in the style of Grossman-Hart-Moore. With incomplete information, there can be inefficient war as well. A weak country has the incentive to pretend to be tough to extract surplus from another. If its bluff is called, there is a costly war. (Next time, I will move this material to a later lecture on asymmetric information and conflict as it does not really fit here.)
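Here is a bare-bones numerical version of that dilemma (my own toy payoffs, not the ones from the lecture notes): arming is strictly dominant, so the unique equilibrium is (guns, guns) even though (butter, butter) is better for both players.

```python
from itertools import product

# Toy guns-vs-butter game with hypothetical payoffs.  Each player has 4 units
# of resources.  "butter": consume 4 but remain defenseless.  "guns": divert
# 2 units to weapons, consume 2, and steal 3 from an unarmed rival.
PAYOFFS = {
    ("butter", "butter"): (4, 4),
    ("guns",   "butter"): (5, 1),   # 2 own butter + 3 stolen; the victim keeps 1
    ("butter", "guns"):   (1, 5),
    ("guns",   "guns"):   (2, 2),   # both armed, nobody dares steal
}
ACTIONS = ("butter", "guns")

def best_response(opponent_action, player):
    """The action maximizing this player's payoff against opponent_action."""
    if player == 0:
        return max(ACTIONS, key=lambda a: PAYOFFS[(a, opponent_action)][0])
    return max(ACTIONS, key=lambda a: PAYOFFS[(opponent_action, a)][1])

equilibria = [(a1, a2) for a1, a2 in product(ACTIONS, repeat=2)
              if a1 == best_response(a2, player=0)
              and a2 == best_response(a1, player=1)]
print("Nash equilibria:", equilibria)                  # only ('guns', 'guns')
print("Equilibrium payoffs:", PAYOFFS[equilibria[0]])  # (2, 2), versus the (4, 4) available
```

The gap between (2, 2) and (4, 4) here is exactly the first kind of inefficiency: resources diverted to guns that produce nothing.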
These models have bilateral conflict. If there are many players, there is room for coalitions to form, pool guns, and beat up weaker players and steal their wealth. What are stable distributions of wealth? Do they involve a dictator and/or a few superpowers? Are more equitable distributions feasible in this environment? It turns out the answer is “yes” if players are “far-sighted”. If I help a coalition beat up some other players, maybe my former coalition-mates will turn on me next. Knowing this, I should just refuse to join them in their initial foray. This can make equitable distributions of wealth stable.
I am writing up notes and slides as I work on a book on this topic with Tomas Sjöström. Here are some slides.
From an article in the Boston Globe:
He’s a sought-after source for journalists, a guest on talk shows, and has even acquired a nickname, Dr. Doom. With the effects of the Great Recession still being keenly felt, Roubini is everywhere.
But here’s another thing about him: For a prophet, he’s wrong an awful lot of the time. In October 2008, he predicted that hundreds of hedge funds were on the verge of failure and that the government would have to close the markets for a week or two in the coming days to cope with the shock. That didn’t happen. In January 2009, he predicted that oil prices would stay below $40 for all of 2009, arguing that car companies should rev up production of gas-guzzling SUVs. By the end of the year, oil was a hair under $80, Hummer was on its way out, and automakers were tripping over themselves to develop electric cars. In March 2009, he predicted the S&P 500 would fall below 600 that year. It closed at over 1,115, up 23.5 percent year over year, the biggest single year gain since 2003.
He’s not such an outlier:
To find the answer, Denrell and Fang took predictions from July 2002 to July 2005, and calculated which economists had the best record of correctly predicting “extreme” outcomes, defined for the study as either 20 percent higher or 20 percent lower than the average prediction. They compared those to figures on the economists’ overall accuracy. What they found was striking. Economists who had a better record at calling extreme events had a worse record in general. “The analyst with the largest number as well as the highest proportion of accurate and extreme forecasts,” they wrote, “had, by far, the worst forecasting record.”
But it’s not a bad gig:
Before there was PPACA (“ObamaCare”) there was already government-provided health insurance. It’s called bankruptcy. Neale Mahoney studies the extent to which it crowds out demand for conventional health insurance.
Hospitals are required to provide emergency care and typically provide other health-care services without upfront payment. Patients who experience large unexpected health care costs have the option of avoiding some of these costs by declaring bankruptcy. Thus bankruptcy is essentially a form of high-deductible insurance where the deductible is the value of assets seizable by creditors. For many of the poorest households, “bankruptcy insurance” is an attractive substitute for health insurance.
Because states differ in the level of exempted assets, the effective deductible of bankruptcy insurance varies across states. This variation enables Mahoney to measure the extent to which changes in bankruptcy asset exemptions affect households' incentives to purchase conventional health insurance. Ideally you would like to compare two identical households, one in Delaware (friendly to creditors) and one in Rhode Island (friendly to debtors) and see how the difference in seizable assets affects their health insurance coverage. Things are never so simple so he uses some statistical finesse to deal with a variety of confounds.
The results allow him to address a number of natural questions. If every state were to adopt Delaware’s (restrictive) bankruptcy regulations, 16.3% of uninsured households would purchase insurance. For a program involving government subsidies to achieve the same increase, 44% of health insurance premiums would have to be subsidized.
From a welfare point of view, bankruptcy insurance is inefficient because the uninsured do not internalize some of the costs they impose by using bankruptcy insurance. On the other hand, because they are uninsured their providers directly bear more of the costs of care, mitigating the moral hazard inefficiency of standard insurance. With this tradeoff in mind we can ask what penalty should be imposed on those who choose not to acquire private health insurance. Mahoney finds that the PPACA penalty is too large by almost a factor of two.
The WSJ Ideas Market blog has a post by Chris Shea about my forthcoming paper with David Lucca (NY Fed) and Tomas Sjöström (Rutgers). Some excerpts:
Full democracies are unlikely to go to war with one another. That’s axiomatic in political science. Yet a new study offers an important caveat: Limited democracies may, in fact, be even more bellicose than dictatorships…….
The authors end with a twist on President George W. Bush’s contention that “the advance of freedom leads to peace”: “Unfortunately,” they say, “the data suggests that this may not be true for a limited advance of freedom.”
Here is another article in Kellogg Insight about the paper.
Did you know that in the states of Oregon and Louisiana a defendant is convicted if 10 out of 12 jurors vote guilty? In Federal trials and in all other states unanimity is required. A recent case was appealed to the Supreme Court challenging Oregon's non-unanimous juries on 14th Amendment grounds. On Monday the Court declined to hear the case. (Here is Eugene Volokh, who brought the petition.)
This opinion piece in the Washington Examiner argues that the unanimity requirement is essential for preserving “liberties.” I assume that what the author means is protection against convicting the innocent. Because on its face it would seem that such a mistake is less likely when unanimous agreement of all 12 jurors is required.
Of course we should care not just about the error of convicting the innocent, but also acquitting the guilty. But even if your concept of liberty puts maximum weight on the protection of the innocent, it is naive to suppose that this is achieved by unanimous juries.
Suppose you are on a jury in Oregon and the foreman has joined with 8 others who have decided to convict. Looking for the 10th vote, he turns to you. Compare your incentives to convict in this situation to the analogous situation where, in Illinois, 11 others are looking your way. With only 9 others prepared to convict there is not only less peer pressure on you, but other things equal the evidence is less persuasive. It has only convinced 9 others.
All jurors see the same evidence but each views it from his or her own perspective. When a jury votes the jurors are signaling to one another how they interpret the evidence. The more other jurors voting to convict, the stronger is your inference that the evidence shows the defendant is guilty. When you are the pivotal 10th juror you know only that 9 others have concluded that the defendant is guilty.
The lower threshold for conviction in fact makes you less likely to vote to convict.
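A back-of-the-envelope version of that inference (my own stylized numbers, not from any of the papers discussed): suppose each of the other 11 jurors votes to convict with probability 0.6 when the defendant is guilty and 0.4 when innocent, and the prior is 50-50. Conditioning on being pivotal under the two rules gives quite different posteriors.

```python
from math import comb

# Stylized numbers: prior probability of guilt 0.5; each of the other 11
# jurors votes to convict with probability 0.6 if the defendant is guilty
# and 0.4 if innocent (they vote their own noisy read of the evidence).

PRIOR, Q = 0.5, 0.6

def posterior_given_k_convict(k, n=11):
    """P(guilty | exactly k of the n other jurors vote to convict)."""
    like_guilty   = comb(n, k) * Q**k * (1 - Q)**(n - k)
    like_innocent = comb(n, k) * (1 - Q)**k * Q**(n - k)
    return PRIOR * like_guilty / (PRIOR * like_guilty + (1 - PRIOR) * like_innocent)

# Pivotal under a 10-of-12 rule: exactly 9 of the other 11 vote to convict.
print(f"P(guilty | pivotal under 10-of-12): {posterior_given_k_convict(9):.3f}")
# Pivotal under unanimity: all 11 others vote to convict.
print(f"P(guilty | pivotal under unanimity): {posterior_given_k_convict(11):.3f}")
```

Being pivotal under unanimity carries much stronger evidence of guilt, which is the strategic effect at work.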
This strategic effect of course has to be weighed against the mechanical effect of lowering the threshold and the net effect could go either way. However, there is one unambiguous sense in which unanimous jury standards are in fact the worst possible.
In a famous paper, Feddersen and Pesendorfer showed that jury voting is informationally efficient in the following sense: given enough jurors with enough independent information the strategic effect outlined above is dampened. And the defendant is convicted if and only if he is guilty.
Now in a sense this is purely of theoretical interest. Juries of 12 are not “arbitrarily large” and even if they perfectly pool their information they will make mistakes. But the point of this result is that it says that jury voting in principle works. Feddersen and Pesendorfer showed that this is true regardless of the threshold fraction of votes required for conviction, but with one single exception. Under unanimity rule the strategic effect is not dampened. Indeed with more and more jurors, knowing that you are the last holdout is stronger and stronger evidence that you should convict. Thus there is always a probability of convicting the innocent even with very large juries.
Navy Captain Owen Honors was relieved of his command of The USS Enterprise. This is the guy behind the viral videos that made the news this week.
I want to blog about the news coverage of the firing. For example, this Yahoo! News article has the headline “Navy Firing Over Videos Raises Questions Of Timing.” Here is the opening paragraph:
The Navy brusquely fired the captain of the USS Enterprise on Tuesday, more than three years after he made lewd videos to boost morale for his crew, timing that put the military under pressure to explain why it acted only after the videos became public.
Two observations:
- Sadly, it does make perfect sense to respond to his firing now by complaining that he wasn’t fired earlier. (And to complain less if he wasn’t fired at all.) The firing now reveals that his behavior crosses some line that the Navy has private information about. Now that we know he crossed that line we have good reason to ask why he wasn’t punished earlier.
- Obviously that fact implies that it is especially difficult for the Navy to fire him now, even if they think he deserves to be fired.
The more general lesson is that, tragically, there is too little reward for changing your mind, due to social forces that are perfectly rational and robust. The argument that a mind-changer is someone who recognizes his own mistakes and is mature enough to reverse course cannot win out against the label of "waffler" or some other pejorative. And the force is especially strong when it comes to picking a leader.

Made it to Brooklyn alive. I don’t see what the big deal is, some nice chap shoveled me a spot and even gave me a free chair!
From @TheWordAt.
Speaking of which, have you noticed the similarity between shovel-earned parking dibs and intellectual property law? In both cases the incentive to create value is in-kind: you get monopoly power over your creation. The theory is that you should be rewarded in proportion to the value of the thing you create. It’s impossible to objectively measure that and compensate you with cash so an elegant second-best solution is to just give it to you.
At least in theory. But in both IP and parking dibs there is no way to net out the private benefit you would have earned anyway even in the absence of protection. (Aren’t most people shoveling spaces because otherwise they wouldn’t have any place to put their car in the first instance? Isn’t that already enough incentive?) And all of the social benefits are squandered anyway due to fighting ex post over property rights.

I wonder how many people who save parking spaces with chairs are also software/music pirates?
Finally, here is a free, open-source Industrial Organization textbook (dcd: marciano.) This guy did a lot of digging and we all get to recline in his chair.

It sounds so simple: you’re nice you make the list, you’re naughty you get a stocking full of coal. But just how much of the year do you have to be nice?
It would indeed be simple if Santa could observe perfectly your naughty/nice intentions. Then he could use the grim ledger: you make the list if and only if you are nice all 365 days of the year. But it’s an imperfect world. Even the best intentions go awry. Try as you may to be nice there’s always the chance that you come off looking naughty due to misunderstandings or circumstances beyond your control. Just ask Rod Blagojevich.
And with 365 chances for misunderstanding, the grim ledger makes for a mighty slim list come Christmas Eve. No, in a world of imperfect monitoring, Santa needs a more forgiving test than that. But while it should be forgiving enough to grant entry to the nice, it can’t be so forgiving that it also allows the naughty to pass. And then there’s that dreaded third category of youngster: the game theorist who will try to find just the right mix of naughty and nice to wreak havoc but still make the list. Fortunately for St. Nick, the theory of dynamic moral hazard has it all worked out.
There exists a number T between 0 and 365 (the latter being a “sufficiently large number of periods”) with three key properties
- The probability that a truly nice boy or girl comes out looking nice on at least T days is close to 100%,
- The probability that the unwaveringly naughty gets lucky and comes out looking nice for T days is close to 0%,
- If you are being strategic and you are going to be naughty at least once, then you should go all the way and be unwaveringly naughty.
The formal statement of #3 (which is clearly the crucial property) is the following. You may consider being naughty for Z days and nice for the remaining 365-Z days and if you do your payoff has two parts. First, you get to be naughty for Z days. Second, you have a certain probability of making the list. Property #3 says that the total expected payoff is convex in Z. And with a convex payoff you want to go to extremes, either nice all year long or naughty all year long.
And given #1 and #2, you are better off being nice than naughty. One very important caveat though. It is essential that Santa never let you know how you are doing as the year progresses. Because once you know you’ve achieved your T you are in the clear and you can safely be naughty for the remainder. No wonder he’s so secretive with that list.
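Here is a small simulation in the spirit of properties #1 and #2 (all parameters are my own stylized choices, not Radner's): a kid who tries to be nice still looks naughty now and then, a deliberately naughty kid sometimes looks nice, and Santa sets the cutoff T just below what a genuinely nice kid can reliably deliver. In this example the nice-all-year strategy comes out on top, and a kid who is going to be naughty a lot does best going all the way.

```python
import random

# Stylized parameters (mine, not from the Radner paper): a kid who tries to
# be nice looks nice on 90% of days, a deliberately naughty kid looks nice
# on 60% of days, and Santa's list requires looking nice on at least T days.

DAYS, P_NICE, P_NAUGHTY, T = 365, 0.90, 0.60, 310
FUN_PER_NAUGHTY_DAY, LIST_PRIZE = 1.0, 1000.0

def prob_make_list(naughty_days, trials=5_000):
    hits = 0
    for _ in range(trials):
        looks_nice = sum(random.random() < P_NICE
                         for _ in range(DAYS - naughty_days))
        looks_nice += sum(random.random() < P_NAUGHTY
                          for _ in range(naughty_days))
        hits += looks_nice >= T
    return hits / trials

for z in (0, 30, 120, 200, 365):
    p = prob_make_list(z)
    payoff = z * FUN_PER_NAUGHTY_DAY + p * LIST_PRIZE
    print(f"naughty on {z:3d} days: P(make the list) = {p:.3f}, "
          f"expected payoff = {payoff:7.1f}")
```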
(The classic reference is Radner. More recently these ideas are being used in repeated games.)
Moving us one step closer to a centralized interview process (a good thing as I have argued), the Duke department of economics is posting video clips of job talks given by their new PhD candidates. Here is the Duke Economics YouTube Channel, and here is the talk of Eliot Annenberg (former NU undergrad and student of mine btw.) I expect more and more departments to be doing this in the future. (Bearskin bend: Econjeff)
While we are on the subject here is a recent paper that studies the Economics academic labor market (beyond the rookie market.) The abstract:
In this paper we study empirically the labor market of economists. We look at the mobility and promotion patterns of a sample of 1,000 top economists over thirty years and link it to their productivity and other personal characteristics. We find that the probability of promotion and of upward mobility is positively related to past production. However, the sensitivity of promotion and mobility to production diminishes with experience, indicating the presence of a learning process. We also find evidence that economists respond to incentives. They tend to exert more effort at the beginning of their career when dynamic incentives are important. This finding is robust to the introduction of tenure, which has an additional negative ex post impact on production. Our results indicate therefore that both promotions and tenure have an effect on the provision of incentives. Finally, we detect evidence of a sorting process, as the more productive individuals are allocated to the best ranked universities. We provide a very simple theoretical explanation of these results based on Holmström (1982) with heterogeneous firms.
via eric barker.
In sports, high-powered incentives separate the clutch performers from the chokers. At least that's the usual narrative, but can we really measure clutch performance? There's always a missing counterfactual. We say that he chokes if he doesn't come through when the stakes are raised. But how do we know that he wouldn't have failed just as miserably under normal circumstances? As long as performance has a random element, pure luck (good or bad) can appear as if it were caused by circumstances.
You could try a controlled experiment, and probably psychologists have. But there is the usual leap of faith required to extrapolate from experimental subjects in artificial environments to professionals trained and selected for high-stakes performance.
Here is a simple quasi-experiment that could be done with readily available data. In basketball, once a team accumulates enough fouls, each additional foul sends the opponent to the free-throw line. This is called the "bonus." In college basketball the bonus has two levels. After fouls 7-9 the penalty is what's called a "one and one." One free-throw is awarded, and then a second free-throw is awarded only if the first one is good. After 10 fouls the team enters the "double bonus" where the shooter is awarded two shots no matter what happens on the first. (In the NBA there is no "single bonus," after 5 fouls the penalty is two shots.)
The “front end” of the one-and-one is a higher stakes shot because the gain from making it is 1+p points where p is the probability of making the second. By contrast the gain from making the first of two free throws is just 1 point. On all other dimensions these are perfectly equivalent scenarios, and it is the most highly controlled scenario in basketball.
The clutch performance hypothesis would imply that success rates on the front end of a one and one are larger than success rates on the first free-throw out of two. The choke-under-pressure hypothesis would imply the opposite. It would be very interesting to see the data.
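If the play-by-play data were collected, the comparison itself is a plain two-proportion test. A sketch with made-up counts (the numbers below are hypothetical placeholders, not real free-throw data):

```python
from math import sqrt, erf

# Hypothetical counts only; with real play-by-play data you would plug in
# the observed totals for each situation.

def two_proportion_z(made1, att1, made2, att2):
    """Standard two-proportion z-test for a difference in make rates."""
    p1, p2 = made1 / att1, made2 / att2
    pooled = (made1 + made2) / (att1 + att2)
    se = sqrt(pooled * (1 - pooled) * (1 / att1 + 1 / att2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p1, p2, z, p_two_sided

# Situation 1: front end of a one-and-one (higher stakes).
# Situation 2: first free throw of a two-shot trip (lower stakes).
p1, p2, z, pval = two_proportion_z(made1=6400, att1=9500, made2=6800, att2=9800)
print(f"front end of one-and-one: {p1:.3f}   first of two: {p2:.3f}")
print(f"z = {z:.2f}, two-sided p-value = {pval:.4f}")
# A significantly lower rate on the front end is evidence of choking;
# a significantly higher rate is evidence of clutch shooting.
```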
And if there was a difference, the next thing to do would be to analyze video to look for differences in how players approach these shots. For example I would bet that there is a measurable difference in the time spent preparing for the shot. If so, then in the case of choking the player is “overthinking” and in the clutch case this would provide support for an effort-performance tradeoff.
We dress like students, we dress like housewives
or in a suit and a tie
I changed my hairstyle so many times now
don’t know what I look like!
Life during Wartime, Talking Heads
Mr C. is the new C.E.O. of your firm, Firm C. He was head of operations at one of your competitors, Firm A. He was passed over for promotion there and had to exit to get to the C Suite. You wonder about the wisdom of your Board: Why would they choose someone who was rejected for the top job by his own company? You subscribe to the "Better the Devil you know, than the Devil you don't" principle. If your firm appoints an internal person to the top job, at least you know their flaws and can adapt to them. This principle also applies at Firm A. So, if they rejected the Devil they know, he must be a really terrible Devil or, to put it in tamer economic terms, a "lemon."
But you are also aware of the counter argument: Real change can only be achieved by an outsider. Mr C said some smart things in the interview process and so you are happy to give him the benefit of the doubt. You are expecting Mr C. to define a mission for Firm C, a mission that everyone can sign on to. Of course, to persuade everyone to work hard on the vision it has to be a "common value" – something everyone agrees is good – not a "private value" – something only a subgroup agrees is good. In this regard, Mr. C surprises you – he makes a big play that Operations are the most important thing in a successful firm. "Look at H.P. and Amazon," he says. "They don't actually make anything, just move stuff around efficiently and/or put it together from parts they buy from other firms. We need innovation in Operations, not fundamental innovation in our product line."
You are shocked. Your firm has an R and D Department that has produced amazing, fundamental innovations. Innovative ability is sprinkled liberally throughout your firm – it is famous for it. It is a core strength of Firm C. Why would anyone want to destroy that and focus on Operations? What should you do? In times of trouble, you have a bible you turn to – Exit, Voice and Loyalty by Albert Hirschman.
Should you give voice to your concerns? The last CEO ignored you, and the new CEO might give you more attention, so you had thought you might talk to him. But your first impressions are bad, and something you say might be misinterpreted and lead to the opposite conclusion in the mind of the new CEO. Talking is dangerous anyway. You might be identified as a troublemaker and given lots of terrible work to do. Better to keep quiet and blend in with the crowd.
Is loyalty enough to keep you working hard anyway? Your firm is not a non-profit and, given that the CEO plans to quash innovation, it is basically going to produce junk. Why should anyone be loyal to that?
You are drawn inexorably to Hirschman’s last piece of advice: exit. This is hard during the Great Recession – there are few jobs going around. You will be joined by everyone else who can exit the sinking ship that is Firm C, so you have to move fast….

To use the justice system most effectively to stop leaks you have to make two decisions.
First, you have to decide what will be a basis for punishment. In the case of a leak you have essentially two signals you could use: you know that classified documents are circulating in public, and you know which parties are publishing them. The distinctive feature of the crime of leaking is that once the documents have been leaked you already know exactly who will be publishing them (The New York Times and Wikileaks), regardless of who the original leaker was and how they pulled it off.
That is, the signal that these entities are publishing classified documents is no more informative about the details of the crime than the more basic fact that the documents have been leaked. There is no additional incentive benefit to using a redundant signal as a basis for punishment.
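One way to make the redundancy point precise, borrowing the standard moral-hazard sufficient-statistic logic (this formalization is mine, not spelled out in the post): let x indicate that a leak occurred, let y record who publishes, and let e be the effort the punishment is meant to influence. If y is pinned down by x, say y = f(x), then

P(x, y \mid e) = P(x \mid e)\,\mathbf{1}\{y = f(x)\}
\quad\Rightarrow\quad
\frac{\partial_e P(x, y \mid e)}{P(x, y \mid e)} = \frac{\partial_e P(x \mid e)}{P(x \mid e)},

so the likelihood ratio (the statistic an optimal incentive scheme conditions on, by Holmström’s informativeness principle) depends on x alone, and conditioning punishment on y as well buys nothing.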
Next you have to decide whom to punish. Part of what matters here is how sensitive the signal is to a given actor’s efforts. Now, the willingness of Wikileaks and The New York Times to republish sensitive documents certainly provides a motive to leakers and makes leaks more likely. But what also matters is the incentive bang for your punishment buck, and deterring every possible outlet from mirroring leaks would be extremely costly. (Notwithstanding Joe Lieberman.)
A far more effective strategy is to load incentives on the single agent whose efforts have the largest effect on whether or not a leak occurs: the guy who was supposed to keep the documents protected in the first place. When a leak occurs, in addition to telling you that some unknown and costly-to-track person spent too much effort trying to steal documents, it tells you that your agent in charge of keeping them secret didn’t spend enough effort doing the job you hired him to do.
You should reserve 100% of your scarce punishment resources for where they will do the most good: incentivizing him (or her).
(Based on a conversation with Sandeep.)
Update: The Australian Government seems to agree. (cossack click: Sandeep)
For 4.6 billion years, the Sun has provided free energy, light, and warmth to Earth, and no one ever realized what a huge moneymaking opportunity was going to waste. Well, at long last, the Sun is finally under new ownership.
Angeles Duran, a woman from the Spanish region of Galicia, is the new proud owner of the Sun. She says she got the idea in September when she read about an American man registering his ownership of the Moon and most of the planets in the Solar System – in other words, all the celestial bodies that don’t actually do anything for us.
Duran, on the other hand, snapped up the solar system’s powerhouse, and all it cost her was a trip down to the local notary public to register her claim. She says that she has every right to do this under international law, which only forbids countries, not individuals, from claiming planets or stars:
“There was no snag, I backed my claim legally, I am not stupid, I know the law. I did it but anyone else could have done it, it simply occurred to me first.”
She will soon begin charging for use. I advise her to hire a good consultant because pricing The Sun is not your run-of-the-mill profit maximization exercise. First of all, The Sun is a public good. No individual Earthling’s willingness to pay incorporates the total social value created by his purchase. So it’s going to be hard to capitalize on the true market value of your product even if you could get 100% market share.
Even worse, it’s a non-excludable public good, which means you have to cope with a massive free-rider problem. As long as one of us pays for it and you turn it on, we all get to use it. So if you just set a price for The Sun, forget about market share: at most you’re going to sell to just one of us.
You have to use a more sophisticated mechanism. Essentially you make the people of Earth play a game in which they all pledge individual contributions and you commit not to turn on The Sun unless the total pledge exceeds some minimum level. You are trying to make each individual feel as if his pledge has a chance of being pivotal: if he doesn’t contribute today then The Sun doesn’t rise tomorrow.
A mechanism like that will do better than just hanging a simple price tag on The Sun but don’t expect a windfall even from the best possible mechanism. Mailath and Postlewaite showed, essentially, that the maximum per-capita revenue you can earn from selling The Sun converges to zero as the population increases due to the ever-worsening free-rider problem.
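A back-of-the-envelope way to see the free-rider force (a toy simulation of my own, not the Mailath-Postlewaite mechanism): draw each Earthling’s value for sunlight at random, let the total cost scale with the population, and ask how often any single person’s pledge is actually pivotal for covering the cost. That probability collapses as the population grows, and with it, heuristically, the amount any voluntary scheme can extract per person.

import numpy as np

rng = np.random.default_rng(1)
PER_CAPITA_COST = 0.4  # hypothetical per-capita cost of running The Sun

def pivot_probability(n, n_sims=20_000):
    """Probability that one person's (made-up) value is pivotal: total value
    covers the cost with her included but falls short without her."""
    values = rng.uniform(0.0, 1.0, size=(n_sims, n))
    total = values.sum(axis=1)
    cost = PER_CAPITA_COST * n
    pivotal = (total >= cost) & (total - values[:, 0] < cost)
    return pivotal.mean()

for n in [2, 10, 100, 1000]:
    print(f"population {n:>4}: P(pivotal) ~ {pivot_probability(n):.4f}")
# As n grows, the chance that any one pledge makes or breaks The Sun vanishes,
# which is the free-rider problem getting worse with population size.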
You might want to start looking around for other planets in need of a yellow dwarf and try to generate a little more competition.
(Actual research comment: Mailath and Postlewaite consider efficient public good provision. I am not aware of any characterization of the profit-maximizing mechanism for a fixed population size and zero marginal production cost.)
[drawing: Move Mountains from http://www.f1me.net]
Every year my employer schedules two weeks for Open Enrollment in the benefits plan. This is the time period where you can freely change health plans, life insurance, etc. In the weeks before Open Enrollment we receive numerous emails reminding us that it is coming up. During the Open Enrollment period we receive numerous emails reminding us of the deadline. The day of the deadline we receive a final email saying that today is the deadline.
Then, every year after the deadline passes the deadline is extended an additional week, and the many people who procrastinated and missed the first deadline are given a reprieve. This happens so consistently that many people know it is coming and understand that the “real” deadline is the second one. So many that it is reasonable to assume that a sizeable number of these people procrastinate up to the second deadline and miss that one too. But there is never a third deadline.
You notice this kind of thing happening a lot. Artificial deadlines that you can be forgiven for missing but only once. It’s a puzzle because whatever causes the first deadline to be extended should eventually cause the second deadline to be extended once everyone figures out that the second deadline is the real one. For example if the reason for the first deadline extension is that year after year many people miss the first deadline and flood the Benefits office with requests, then you would expect that this will eventually happen with the second deadline.
The deadline-setters must feel on firmer ground denying a second request by saying, “We warned you about the first deadline, then we were so nice and gave you an extension, and you still missed that one.” But if a huge number of people missed the second deadline, no doubt they could still mount enough pressure for a second extension. After all, the original speech (“We sent you emails every day warning you about the deadline and you still missed it”) didn’t stem the tide.
Is it just that every nth deadline is a new coordination game among the employees, and the equilibrium number of extensions is simply determined by the first deadline at which the “no extension request” equilibrium gets selected? I think there is more structure than that, because you rarely see more than one extension and you almost never see zero.
I think there is scope for some original theory here and it could be very interesting. What’s your theory?
As junior recruiting approaches, we cannot help but speculate on the optimal way to compare apples to oranges – candidates across different fields (e.g. micro vs macro) and across universities. I speculated a while ago that a “best athlete” recruiting system across fields is prone to gaming. Each field might simply claim its candidate is great. To stop that happening, you might have to live with having slots allocated to fields and/or rotating slots over time.
It turns out that Yeon-Koo Che, Wouter Dessein and Navin Kartik have thought about something much more subtle along these lines in their paper “Pandering to Persuade.” They consider comparisons both across fields and across candidates from different universities. I’m going to give a rough synopsis of the paper.
Suppose the recruiting committee in an economics department is deciding whether to hire a theorist or a labor economist. There is only one labor economist candidate and her quality is known. There are two theorists, one from University A and one from University B. The recruiting committee would like to hire a theorist if and only if his quality is higher than the labor economist’s. Also, the recruiting committee and everyone else believes that, on average, candidates from University A are better than those from University B. But of course this is only true on average. Luckily, some theorists can read the candidates’ papers and help fine-tune the committee’s assessment of the theory candidates. They share the committee’s interest in hiring the best theorist, but they are quite shallow and hence uninterested in research outside their own field. In particular, theorists do not care for labor economics and always prefer a theorist at the end of the day.
So, the recruiting committee must listen to the theorists’ recommendation with care. First, the theorists have a huge incentive to exaggerate the quality of their favored candidate if this carries influence with the committee, so quality evaluations cannot be trusted. All the theorists can credibly do is say which candidate is better, not by how much. But there is a further problem: if the theorists say candidate B is better, the committee, given its prior, might update favorably about candidate B and yet still prefer to hire the labor economist! Being theorists, the senders can do backward induction, and they see the difficulty with a strategy that is too honest. The solution is obvious to them: extol the virtues of candidate A even when candidate B is a little better. Hence, in equilibrium, the candidate from the ex ante better university is favored. But candidate B still has a shot: if he is sufficiently good, the theorists still recommend him. The committee may with some probability still go with the labor economist, so this recommendation is risky, but if candidate B is sufficiently good the theorists would rather run that risk than push the favored candidate A. I refer you to the paper for the full set of equilibria but, as you can see, the paper is fun and interesting.
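To see why the too-honest strategy unravels, here is a toy numerical version (my own construction, not the calibration in the paper): the labor economist’s known quality is 0.9, theorist A’s quality is uniform on [0.5, 1.5] and theorist B’s on [0, 1]. If the theorists simply reported whichever theorist is better, the committee’s posterior on a recommended B would fall below 0.9 while its posterior on a recommended A stays above it, so an honest “B is better” report gets the labor economist hired, which is exactly what the theorists want to avoid.

import numpy as np

rng = np.random.default_rng(2)
N = 1_000_000
LABOR_QUALITY = 0.9                  # known quality of the labor economist (made up)

qA = rng.uniform(0.5, 1.5, size=N)   # candidate from the ex ante stronger school
qB = rng.uniform(0.0, 1.0, size=N)   # candidate from the ex ante weaker school

honest_rec_B = qB > qA               # "too honest" rule: recommend whoever is better
post_A = qA[~honest_rec_B].mean()    # committee's posterior after hearing "A"
post_B = qB[honest_rec_B].mean()     # committee's posterior after hearing "B"

print(f"P(recommend B) = {honest_rec_B.mean():.3f}")
print(f"E[quality of A | 'A' recommended] = {post_A:.3f}  (beats {LABOR_QUALITY})")
print(f"E[quality of B | 'B' recommended] = {post_B:.3f}  (loses to {LABOR_QUALITY})")
# Hearing "B" gets the labor economist hired, which the theorists hate, so honest
# reporting is not an equilibrium: they shade toward recommending A unless B is
# much better, which is the pandering distortion.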
There are some extensions. In one, the authors study delegating the decision to the theorists. Sometimes the department will lose out on a good labor economist, but at least there is no incentive for the theorists to pick the weaker theorist. This is the giving-slots-to-fields solution I wondered about, and it is derived in this elegant model.
We have a new guest-blogger: Roger Myerson.
Roger is a game theorist, but his work is known to everyone – theorist or otherwise – who has done graduate work in economics. If an economist from the late nineteenth century, like Edgeworth, or the early twentieth century, like Marshall, were to wake up and ask, “What’s new in economics since my time?”, I guess one answer would be, “Information Economics.”
Is the investment bank selling me this security because it is a good investment, or because it is trying to dump it? Is a bank’s employee screening borrowers carefully before he makes mortgage loans? Does the insurance company have enough reserves to cover its policies if many of them go bad at the same time? All these topical situations are characterized by asymmetric information: one party knows some information, or is taking an action, that is not observable to a trading partner.
While the classical economists certainly discussed information, they did not think about it systematically. At the very least, we have to get into the nitty-gritty of how an economic agent’s allocation varies with his actions and his information if we are to study the impact of asymmetric information. And perfect competition, with its focus on prices and quantities, is not a natural paradigm for studying these kinds of issues. But if we open the Pandora’s box of production and exchange to study allocation systems broader than perfect competition, how are we even going to sort through the infinite possibilities that appear? And how are we going to determine the best way to deal with the constraints imposed by asymmetric information?
These questions were answered by the field of mechanism design, to which Roger Myerson made major contributions. If an allocation of resources is achievable by any allocation system (or mechanism), then it can be achieved by a “direct revelation game” (DRG) in which agents are given the incentive to report their information honestly, are told which actions to take, and are given the incentive to follow orders. To get an agent to tell you his information, you may have to pay him an “information rent.” To get an agent to take an action, you may have to pay him a kind of bonus for performing well, an “efficiency wage.” But these payments are unavoidable: if you have to pay them in a DRG, you have to pay them (or more!) in any other mechanism. All this is quite abstract, but it has practical applications. Roger used these techniques to show that the kinds of simple auctions we see in the real world in fact maximize expected profits for the seller in certain circumstances, even though they leave information rents to the winner. These rents must be paid in a DRG, and hence if an auction leaves exactly these rents to the buyers, the seller cannot do any better.
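For a concrete feel of that result, here is a standard textbook special case with a small simulation of my own (not from the post): with bidders whose values are i.i.d. uniform on [0, 1], Myerson’s optimal auction reduces to a second-price auction with a reserve price of 1/2, and expected revenue is higher at that reserve than with no reserve at all.

import numpy as np

rng = np.random.default_rng(0)

def expected_revenue(reserve, n_bidders=2, n_sims=200_000):
    """Monte Carlo revenue of a second-price auction with a reserve,
    bidders' values i.i.d. uniform on [0, 1]."""
    v = np.sort(rng.uniform(size=(n_sims, n_bidders)), axis=1)
    highest, second = v[:, -1], v[:, -2]
    # Sell only if the highest value clears the reserve; the winner pays
    # the larger of the second-highest value and the reserve.
    revenue = np.where(highest >= reserve, np.maximum(second, reserve), 0.0)
    return revenue.mean()

for r in [0.0, 0.25, 0.5, 0.75]:
    print(f"reserve {r:.2f}: expected revenue ~ {expected_revenue(r):.3f}")
# The reserve of 1/2 beats no reserve even though it sometimes blocks a sale:
# the seller sacrifices some efficiency to cut the winner's information rent.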
For this work and more, he won the Nobel Memorial Prize in Economics in 2007 with Leo Hurwicz and Eric Maskin. Recently, Jeff mentioned that Roger and Mark Satterthwaite should get a second Nobel for the Myerson-Satterthwaite Theorem, which identifies environments where it is impossible to achieve efficient allocations because agents have to be paid information rents to reveal their information honestly. This work also uses the DRG framework I described above.
Over time, Roger has become more of an “applied theorist.” That is a fuzzy term that means different things to different people. To me, it means that a researcher begins by looking at an issue in the world and writes down a model to understand it and say something interesting about it. Roger now thinks about how to build a system of government from scratch, or about the causes of the financial crisis. How do we make sure leaders and their henchmen behave themselves and don’t try to extract more than minimal rents? How can the incentives of investment advisors generate credit cycles?
These questions are important and obviously motivated by political and economic events. The first question belongs to “political economy” and hints at Roger’s interests in political science. More broadly, Roger is now interested in all sorts of policy questions, in economics and domestic and foreign policy.
Jeff and I are very happy to have him as a guest blogger. We hope he finds it easy and fun, and that the blog provides him with a path to get his analyses and opinions into the public domain. We hope he becomes a permanent member of the blog. So, if among the posts about masturbation and Charlie Sheen’s marital problems you find a post about “What should be done in Afghanistan,” you’ll know who wrote it.
Welcome Roger!