You are currently browsing the tag archive for the ‘incentives’ tag.
Hertz is making an offer for Dollar-Thrifty. Consolidation of this sort helps all players in the industry by reducing capacity and allowing all firms, including those outside the merger, to raise prices. (I already talked about this in a post about the United-Continental-US Airways merger dance.) There is an incentive then to stay outside the merger and gain from it. There has to be a countervailing force to overcome the positive externality of a merger. In the rental car case, it seems Dollar has access to a leisure-traveller market that Hertz would like to get their hands on. And there is an interesting twist to the merger deal they signed with Dollar. The Avis CEO would like to bid for Dollar (or so he says) and writes to Dollar:
“[W]e are astonished that … you have compounded these shortcomings by agreeing to aggressive lock-up provisions, such as unlimited recurring matching rights plus an unusually high break-up fee (more than 5.25% of the true transaction value, as described by your own financial advisor), as a deterrent to competing bids that could only serve to increase the value being offered to your shareholders.”
Hertz has built a nifty-seeming “match the competition” clause into its agreement with Dollar. If other bidders emerge, Hertz gets to match their bids, and there is a break-up fee that deters Dollar from accepting another suitor.
There are several strategic effects. If Avis truly wants access to Dollar’s leisure market, this clause clearly makes it hard to acquire. But it leaves Hertz vulnerable to a spoiling strategy: Avis can bid up the price Hertz pays for Dollar by making high bids of its own. Avis won’t win Dollar but will leave Hertz stuck with a big payment.
Spoiling may backfire if it triggers a future price war: stuck with a big payment, Hertz may be forced to take a short-run perspective and slash prices to survive. We will see what happens in the next few days.
A water pipe to the Greater Boston area has broken. Two million residents have to boil water before they drink it. We were moving apartments so we were a bit slow off the mark. By the time I got to Walgreens this morning, all the water was sold out. Even the San Pellegrino at Whole Foods was gone. The water shortage has all the features of a classic bank run.
Of course everyone needs more bottled water than they usually buy. Who knows when the pipe will be fixed? So, everyone buys extra water for insurance. But this only increases everyone else’s incentive to stock up, as the risk of finding empty shelves grows. This is just like a classic bank run: the more others withdraw money, the more I withdraw money, as there may be nothing left for me to withdraw later. Lo and behold, bottled water is all gone within hours, just like the deposits in a bank facing a run.
Luckily, there was no beer run. So, I’m safe.
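The feedback loop can be sketched as a toy best-response model (all the numbers below are invented for illustration): each shopper hoards exactly when hoarding by others would empty the shelves, so a small scare tips the market from the calm equilibrium into a run.

```python
# Toy run-on-the-store model; all numbers are invented for illustration.
# 1000 shoppers, 1500 bottles on the shelf. A calm shopper buys 1 bottle;
# a worried shopper grabs 4. Hoarding by others raises the chance of a
# stockout, which makes hoarding a best response: a bank run in miniature.
N, STOCK, HOARD = 1000, 1500, 4

def stockout(frac_hoarding):
    """Do the shelves empty when this fraction of shoppers hoards?"""
    demand = N * (1 - frac_hoarding) + N * frac_hoarding * HOARD
    return demand > STOCK

def long_run(frac0):
    """Iterate best responses: hoard iff current hoarding causes a stockout."""
    frac = frac0
    for _ in range(20):
        frac = 1.0 if stockout(frac) else 0.0
    return frac
```

With 10% of shoppers nervous the market settles down, but at 20% the dynamics converge to everyone hoarding: same pipe, same water, two very different equilibria.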
You are an ambitious, young Presidential-wannabe. This makes you a trifle immodest and you decide to write an autobiography, Volume 1. It’s going to set the stage for your Presidential bid. Some may say you have yet to do anything so said volume may not sell too well, even though you have an exotic cocktail of a family background and were President of the Harvard Law Review. They may be right so you are not willing to pay a lump sum fee to employ an agent to sell your manuscript: not only might your book not work out, you would be stuck with a bill from an agent to add to your law school debts.
Luckily for you, pretty much every guy who writes a book is in the same position as you: immodest enough to write a book and yet knowing that it might not sell. So, there is a standard contract that is signed with a book agent: they work for you to get you a contract and if the book actually sells they get 15%. This way you share the risk: if the book fails, at least you do not also lose the amount you paid the agent; in return, if it succeeds, you do not get to keep all the benefits. The 15% contract gives you a form of insurance. Plus, it gives the agent the incentive to work hard, helping to alleviate the moral hazard problem.
Miracle of miracles, the book does actually sell eventually. It lies ignored but you become kind of famous anyway and then people buy it. Now you’re ready for Volume 2. Is the old book agent contract still the best option for you?
Well, Volume 2 is almost certainly going to fly off the shelves. You do not need to share the risk. All you care about is getting the best price, and you don’t need protection in case of failure as it ain’t going to fail. Best just to go with a great negotiator. In fact, a well-connected Washington lawyer might be just the thing. You just pay him upfront and he calls his contacts. And he’s done it before. It’s costly if your book fails: you don’t get the rest of your advance, or you may even have to give back the chunk they gave you. But Volume 2 is your road to the Presidency; Volume 1 was just laying the foundation. Everyone will read it as you’re intriguing and you’ll get to keep your advance and even get royalties. Now, you can afford to be President as your law school debts are paid and you can even send your kids to a spiffy private school.
Miley Cyrus, like many artists, doesn’t want to raise the price of her concert tickets even though there is excess demand. By keeping the price low she allows fans who could not afford the market-clearing price to see her concerts. She is effectively paying to allow them to enjoy her shows. Does this make her an altruist?
A textbook argument against, but one that is wrong, is the following. At the low price there is a market for ticket scalpers. Ticket scalpers will raise the price to the market-clearing level. Those fans who would sell their tickets to scalpers reveal that they prefer the money to the tickets. And they get the money in exchange for the tickets. Likewise those that buy tickets from scalpers reveal that they value the tickets more than the money. So the secondary market makes everyone better off. So if Miley Cyrus were truly an altruist she would allow this to happen rather than paying a price to prevent it.
The problem with the argument is that it works only because the ticket scalper was unanticipated. If all parties knew that tickets would sell at the market-clearing price, then the “true fans” that Miley is targeting would never actually get a ticket in the first place, and this would make them worse off. Either they couldn’t afford one, or, if tickets were originally allocated by lottery, the additional rents would attract more entrants to that lottery and dilute their chances.
So we can’t argue that Miley is not an altruist. But we can argue that Miley’s refusal to raise prices is perfectly consistent with profit maximization. Here is a model. A fan’s willingness to pay to see Miley Cyrus in concert is a function of who else is there. It’s more fun if she is singing to screaming pre-teen girls because they add to the experience. It’s no fun if she is singing to a bunch of rich parents and their kids who don’t know how to cut loose.
With this model, no matter how much Miley would like to raise the price to take advantage of excess demand, she cannot. Because the price acts as a screening instrument. Higher prices select a less-desirable composition of the audience, lowering willingness to pay. The profit maximizing price is the maximum she can charge before this selection effect starts to reduce demand. At that price and everywhere below there is excess demand.
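Here is a numerical sketch of that screening story (all the numbers are invented for illustration): rich buyers’ willingness to pay depends on the fraction of true fans in the audience, so demand collapses the moment the price climbs above what fans can afford.

```python
# Toy peer-effects pricing model; all numbers are invented for illustration.
# 1000 "true fans" can pay at most $30. 200 rich buyers have a base value of
# $20 plus a $120 bonus proportional to the fan share of the audience.
CAPACITY = 500
N_FANS, FAN_BUDGET = 1000, 30
N_RICH, RICH_BASE, PEER_BONUS = 200, 20, 120

def demand_and_revenue(price):
    """Who shows up at this price? Iterate to a fixed point of the peer effect."""
    fan_share = 1.0                          # start optimistic and iterate down
    for _ in range(100):
        fans_in = N_FANS if price <= FAN_BUDGET else 0
        rich_in = N_RICH if price <= RICH_BASE + PEER_BONUS * fan_share else 0
        total = fans_in + rich_in
        if total == 0:
            return 0, 0
        fan_share = fans_in / total          # seats go by lottery if oversold
    return min(total, CAPACITY), min(total, CAPACITY) * price

best_price = max(range(201), key=lambda p: demand_and_revenue(p)[1])
```

At every price up to $30 the show oversells its 500 seats; one dollar more and first the fans, then the rich, all vanish. So the profit-maximizing price sits exactly where the post says it does: at the top of the range with excess demand.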
This is related to a paper by Simon Board on monopolistic pricing with peer effects.
We noticed that professional golfers today have ads on their hats, sleeves, collars, belt-buckles, shoes, etc., while in the past few had more than one or two ads. At an individual level this makes sense, but collectively the PGA would do better to centralize its negotiations with advertisers.
When Phil Mickelson considers selling another ad he has to lower his price. He trades off the additional sale against the reduction in price to decide whether it is worth it. He doesn’t take into account how his increased supply lowers the price of ads for all PGA golfers. When this negative externality is not internalized, the PGA as a whole sells too many ads. PGA-wide ad revenue would increase if the golfers could negotiate ads as a group rather than individually.
Why don’t they? In the short-term it would be simple. Each golfer reports the ad revenue he is currently earning. Then an agent for the PGA negotiates with advertisers to sell a block of ads and distributes them optimally across golfers. This optimization would not only involve keeping quantity low but it would also take into account complementarity between golfer and ad, screen time, diversification, etc. Then, the total ad revenue would be shared among the players in some way that gives each player at least as much as he was earning individually. Since total revenue would be higher, there would be money left over to divide up in some way.
The problem is how to manage this over time. In order to keep a majority of players willing to go along with it, they will have to be promised at least as much as their autarky value. But the most recent public information about that value was recorded just before they entered the cooperative agreement. Over time that information depreciates as players rise and fall and new players arrive.
But privately, each individual player would be able to estimate their ad revenues should he go it alone. When the players bargain over shares, each individual player will exaggerate his earnings potential and insist on compensation for his outside option. When public information is weak enough, these demands can add up to more than the group can earn, at which point bargaining breaks down and autarky prevails.
Obama’s Nuclear Posture Review has been revealed. The main changes:
(1) We promise not to use nuclear weapons on nations that are in conflict with the U.S. even if they use biological and chemical weapons against us;
(2) Nuclear response is on the table against countries that are nuclear, in violation of the N.P.T., or are trying to acquire nuclear weapons.
This is an attempt to use a carrot and stick strategy to incentivize countries not to pursue nuclear weapons. But is it any different from the old strategy of “ambiguity” where all options are left on the table and nothing is clarified? Elementary game theory suggests the answer is “No”.
First, the Nuclear Posture Review is “Cheap Talk”, the game theoretic interpretation of the name of our blog. We can always ignore the stated policy, go nuclear on non-nuclear states or non-nuclear on nuclear states – whatever is optimal at the time of decision. Plenty of people within the government and outside it are going to push for the optimal policy so it’s going to be hard to resist. Then, the words of the review are just that – words. Contracts we write for private exchange are enforced by the legal system. For example, a carrot and stick contract between an employer and employee, rewarding the employee for high output and punishing him for low output, cannot be violated without legal consequences. But there is no world government to enforce the Nuclear Posture Review, so it is Cheap Talk.
If our targets know our preferences, they can forecast our actions whatever we say or do not say, so-called backward induction. So, there is no difference between the ambiguous regime and the clear regime.
What if our targets do not know our preferences? Do they learn anything about our preferences by the posture we have adopted? Perhaps they learn we are “nice guys”? But even bad guys have an incentive to pretend they are nice guys before they get you. Hitler hid his ambitions behind the facade of friendliness while he advanced his agenda. So, whether you are a good guy or bad guy, you are going to send the same message, the message that minimizes the probability that your opponent is aggressive. This is a more sophisticated version of backward induction. So, your target is not going to believe your silver-tongued oratory.
We are left with the conclusion that, on a game-theoretic analysis, the Nuclear Posture Review is little different from the old policy of ambiguity.
With the help of DressRegistry.com:
Our goal is to lessen the chance that someone attending the same event as you will be wearing the EXACT same dress. We also hope we can be a resource for groups planning events through our message board and marketing partners. While it’s true we can not guarantee that someone else won’t appear in the same dress as you, the more that you (and others like you) use DressRegistry.com the lower that likelihood will be. So please use our site and have fun!
You find your event on their site and post a description and picture of the dress you will be wearing. When other guests check in to the site, they will know which dresses to avoid, in order to prevent dress disasters such as this one (Pink and Shakira, featured on the site):

The site promises “No personal information is displayed” but I wonder if anonymity is a desirable feature in this kind of mechanism. It seems to open the door to all kinds of manipulation:
- Chicken. Suppose you have your heart set on the Cache: green, ankle, strapless (picture here) but you discover that it has already been claimed for the North Carolina Museum of Art Opening Gala. You could put in a second claim for the same dress. You are playing Chicken and you hope your rival will back down. Anonymity means that if she doesn’t and the dress disaster happens, you’re safe because it’s only she-said, she-said. Worried she might not back down? Register it 10 times.
- Hoarding. Not sure yet which dress is going to suit you on that day? Register everything that tickles your fancy, and decide later!
- Cornering the Market. You don’t just want to avoid dress disasters, you want to be the only one wearing your favorite color or your favorite designer or… Register away all the competition.
- Intimidation. Someone has already registered a knock-out dress that’s out of your price range. Register it again. She might think twice before wearing it.
What would happen if the individual mandate were removed from the health care bill? Republicans are proposing to do that but leave intact the rules on pre-existing conditions. This sounds like disaster because then the equilibrium is for only the already-sick to have “insurance,” meaning premiums are very high, meaning that the healthy prefer not to buy insurance until they are already sick.
This is not a problem of “skyrocketing costs” as some characterize it. If the same number of people get sick, then costs are the same. It’s the premiums that skyrocket. The problem with that is that health care insurance is no longer insurance.
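The unraveling is mechanical enough to simulate. In this toy model (my numbers, not from the bill), the insurer must charge everyone the pool’s average cost, and each person buys only if the premium is below 1.2 times his own expected cost:

```python
# Adverse-selection spiral; all numbers are invented for illustration.
# Expected annual costs run from $100 to $10,000 across 101 people; each will
# pay up to 1.2x his own expected cost (a modest risk premium). With no
# mandate and no pricing on pre-existing conditions, the premium must equal
# the average cost of whoever remains in the pool.
costs = [100 + 99 * i for i in range(101)]         # $100 ... $10,000
willingness = [1.2 * c for c in costs]

insured = set(range(len(costs)))
premiums = []
while insured:
    premium = sum(costs[i] for i in insured) / len(insured)
    premiums.append(premium)
    stay = {i for i in insured if willingness[i] >= premium}
    if stay == insured:                            # no one else drops out
        break
    insured = stay
```

Starting from full coverage at a $5,050 premium, the pool unravels in a handful of rounds to 29 of the 101 people at a premium of $8,614: the healthy have stopped buying, which is exactly the sense in which the insurance is no longer insurance.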
But the individual mandate is not the only way to bring the insurance back into health insurance. (And it appears that the penalties are so low that we are headed for this equilibrium anyway. See this article on MR.) Many employer-based health insurance providers use a system of “open enrollment.” You can sign on to the plan when you join, but if you don’t and then decide later that you want to, you must wait until a specific narrow window of time.
I don’t know what the intended purpose of open enrollment is but one effect it has is to give incentives to buy insurance before you get sick. A system like this would work just fine in place of the individual mandate.
Even better: when you turn 21 you are able to buy insurance from any provider regardless of your pre-existing conditions. This right continues as long as you have had insurance continuously. If you chose not to buy insurance in the past (and you could have afforded it) and you wish to buy it now, then you cannot be denied coverage due to a pre-existing condition. However, insurance companies are not required to offer you the same policy as the main pool.
Update: Austin Frakt argues that the penalties are already high enough to avoid the bad equilibrium.
Last week, I was in line at the front desk of a condo hotel in Naples, Florida at around 9 pm. My electronic key had stopped working and I needed a replacement, i.e. I already had a room.
Unlike me, the guy ahead of me was looking to rent a two-bedroom. The clerk said those were all full, but she could offer him a couple of one-bedrooms; she had three left. The guy asked the rate and she quoted him $269/room. He said that was too much and she asked how much he was comfortable paying. My guess is that, as it was pretty late, the rooms were unlikely to be used that night, so the clerk was willing to negotiate. The guy said he was willing to pay at most $200/room. The clerk said she had to ask her manager and disappeared into the back room. She came back with an offer of $239 and the guy said that was too much. The clerk was unwilling to haggle further and the guy left.
All I wanted was a new key. I was itching for the guy to leave so I could go to bed and ended up focusing on the discussion as I was hoping it would end quickly. For an economist, it was pretty fun.
First, who knew you could haggle over hotel room prices this way? A sign of the recession, perhaps. Second, the “let me take your offer to my manager” hearkens back to haggling for cars, so there is a nice symmetry with that subculture of bargaining.
Finally, we see how delegation can help in certain situations. Normally, when an agent works for a principal, the principal tries to align incentives so the agent works hard on her behalf. This results in the optimality of bonuses, commissions and the like where the agent shares the profits of hard work. But sometimes it is good to commit to turn down business.
A firm with monopoly power wants to maintain a high price. Once it has made a take or leave it offer to a buyer, if the buyer rejects the offer, the firm has the incentive to cut the price to get business. Knowing this will happen, high value buyers will reject the initial offer and wait for the lower price. The firm’s market power diminishes as a result of its inability to commit not to lower prices. This is a hugely simplified version of the Coase conjecture.
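The two-period textbook version is easy to compute (standard setup; the discount factor is my choice): buyer values are uniform on [0, 1], and because the firm cannot resist monopolizing the leftover low-value buyers in period two, forward-looking buyers wait and profit falls below the committed-monopoly level of 1/4.

```python
# Two-period Coase-conjecture sketch; standard textbook setup, numbers mine.
# Buyer values are uniform on [0, 1]; everyone discounts period 2 by DELTA.
# With commitment the firm charges the monopoly price 1/2 and earns 1/4.
DELTA = 0.9

def profit_no_commitment(cutoff):
    """Profit when buyers with value >= cutoff buy in period 1.

    Period 2 is a monopoly on the leftover values U[0, cutoff], so p2 = cutoff/2,
    and the cutoff buyer's indifference pins down the period-1 price."""
    p2 = cutoff / 2
    p1 = cutoff - DELTA * (cutoff - p2)
    return (1 - cutoff) * p1 + DELTA * p2 * (cutoff - p2)

best_cutoff = max((i / 1000 for i in range(1001)), key=profit_no_commitment)
best_profit = profit_no_commitment(best_cutoff)
```

Without commitment the best the firm can do here is about 0.233, versus 0.25 with commitment: that gap is the market power the back-room manager helps the hotel claw back.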
But, if instead of the firm/hotel, a manager/clerk makes the offer, there is potentially a different conclusion. If the clerk does not see a share of the profits generated by the extra sale, the clerk has no incentive to cut the price. This results in some business being turned away but allows the hotel to maintain some market power. I guess something like this happened in the hotel bargaining I observed. (Perhaps the clerk is on commission to make a sale and the manager in the back room makes sure the rooms are not then just given away to generate the commission?) If the clerk had the same incentives as the hotel owner, it would be bad for profits, as commitment power would evaporate. Mis-aligning incentives makes more sense.
Or there is a small chance that a person observing the conversation reports it on his blog. The hotel’s reputation for maintaining high prices goes up in smoke and future sales are made at low prices. Knowing this, the hotel refuses to accept low offers and keeps its reputation intact.
You are having dinner with your child in a restaurant. He has ordered chicken tenders with fries and you force him to have a small salad before the main course arrives. In a “When Harry met Sally” moment, you ask for the fries to be brought “on the side”, i.e. on another plate.
He has a small amount of the chicken and you give him a few fries as a reward. He then claims that he is full. Is he really?
There are two states of the world, full F and hungry H. The state is known to the agent/child but the principal/parent does not know the state. The agent has private information. How does the principal work out the true state? Offer the agent another french fry. If it is accepted, the true state is H – he is truly hungry and only pretending to be full. If he refuses, it is F and the fries he had earlier filled him up. Of course you have to know your kids to determine which food product separates or screens the two states.
The first few times you try this trick, you can go a bit further. Once he has accepted the fry, you point out he must really be in state H and make him have some more chicken. In the long run, he will work out that accepting the fry leads to more chicken. He will refuse the fry and you’ll never work out whether the true state is F or H. Your solution depends on bounded rationality, and if learning eliminates it, you are powerless in the long run. Also, if you choose the wrong food group you won’t be able to screen the two states in the first place: in our case, ice cream is acceptable in all states, while more french fries are acceptable if and only if the true state is H.

Because he is 17-year-old Russian high-school student Andrey Ternovskiy. He’s the guy who created Chatroulette by himself, on a whim, in 3 months, and is now in the US fielding offers, meeting with investors, and considering never again returning to Moscow.
Should he sell? Would he sell? To frame these questions it is good to start by taking stock of the assets. He has his skills as a programmer, the codebase he has developed so far, and the domain name Chatroulette.com, which is presently a meeting place for 30 million users with an additional 1 million new users per day. His skills, however formidable, are perfectly substitutable; and the codebase is trivially reproducible. We can therefore consider the firm to be essentially equal to its unique exclusive asset: the domain name.
Who should own this asset? Who can make the most out of it? In a perfect world these would be distinct questions. Certainly there is some agent, call him G, who could do more with Chatroulette.com than Andrey, but in a perfect world Andrey keeps ownership of the firm and just hires that person at his competitive wage.
But among the world’s many imperfections, the one that gets in the way here is the imperfection of contracting. How does Andrey specify G’s compensation? Since only G knows the best way to build on the asset, Andrey can’t simply write down a job description and pay G a wage. He’d have to ask G what that job description should be. And that means that a fixed wage won’t do. The only way to get G to do that special thing that will make Chatroulette the best it can be is to give G a share of the profits.
If Andrey is going to share ownership with G, who should have the larger stake? Whoever has a controlling stake in the firm will be the other’s employer. So, should G employ Andrey (as the chief programmer) or the other way around? Andrey’s job description is simple to write down in a contract: whatever G says Chatroulette should do, Andrey programs. Unlike when Andrey employs G, G doesn’t have to know how to program; he just has to know what the final product should do. And if Andrey can’t do it, G can just fire him and find someone who can.
So Andrey doesn’t need any stake in the profits to be incentivized to do his job, but G does. So G should own the firm completely and Andrey should be its employee. The asset is worth more with this ownership structure in place, so Andrey will be able to sell for a higher price than he could expect to earn if he were to keep it.
Now that Roger Myerson is one. Today at Northwestern he presented his new work on A Moral Hazard Model of Credit Cycles. It attracted a huge crowd, not surprisingly, and introduced a whole new class of economists to the joy and sweat of a Roger Myerson lecture.
(Roger apparently hasn’t read my advice for giving talks.) Listening to Roger speak is not only thoroughly enlightening and entertaining, it’s calisthenics for the mind. I once brought a pen and pad to one of his talks and outlined his nested digressions. It is absolutely a thing of beauty when every step down the indentation ladder is paired with a matching step on the way back up. When he finally returns to the original stepping-off point, no threads are left hanging.
Keeping track of all this in your head and still following the thread of the talk is a bit like Lucy and Ethel wrapping candy.
Still, I think I got the basic point. Roger has a model of credit cycles that falls out naturally from a well-known feature of dynamic moral hazard. In his model, banks are intermediaries between investors and entrepreneurs and they are incentivized via huge bonuses to invest efficiently. These bonuses are paid only when the bankers retire with a record of success.
These backloaded incentives mean that bankers are trusted with bigger funds the closer they are to retirement. That’s when the coming payout looms largest, deterring bankers from diverting the larger sums for their own benefit. Credit cycles are an immediate result. Because bankers handle larger sums near their retirement than those just starting out, their retirement means that total investment must go down. So the business cycle tracks the age demographics of the banking sector.
(It’s the Cocoon theory of business cycles, because if you could extend the lives of bankers you would enhance the power of incentives, lowering the moral hazard rents and increasing investment.)

Presh Talwalker reports:
After a late night out, I found myself at the only eatery still open in the suburbs, the late night haven that is Denny’s. When paying for the meal, I noticed a curious offer on the receipt that read something like:
If your receipt does not list a food or drink you ordered, let us know and you will get the item free plus a $5 gift certificate.
Which, as Presh deduced, is a counter-bribe from Denny’s management so you will rat out your server if he or she bribes you with free food in return for a tip.
Kjerstin Erickson is selling a 6% stake in her lifetime income for $600,000 through a vehicle known as the Thrust Fund:
Erickson’s Thrust Fund comes at a time of deep experimentation in early-stage financing across the technology and media industries. The transparency afforded by social networking is making it easier for investors to vet people’s reputations and hold them accountable. At the same time, the initial amount of capital needed to build, market and distribute a product or service has fallen, undermining the venture capital model and making angel investors relatively more powerful.
Think of Kjerstin as a self-managed firm. She could issue debt or equity. The Modigliani-Miller theorem says the choice is irrelevant in a frictionless world; taxes are one friction that explains why most people in Kjerstin’s position choose to issue debt. Her income is taxed, but interest on debt is often tax-deductible.
But a key difference between Kjerstin and a firm is that if you acquire Kjerstin you cannot fire the manager. So your capital structure is also your managerial incentive scheme. Debt makes Kjerstin a risk-lover: she gets all the upside after paying off her debts, and her downside is limited because she can just default. With equity she keeps 94% of her earnings no matter what they are.
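The incentive difference is easy to see in payoffs. Here is a sketch (the $900k face value of the hypothetical debt, and the incomes, are invented numbers): under equity her payoff is linear in income, while under debt it is convex, so a mean-preserving gamble that leaves the equity-financed Kjerstin indifferent is strictly attractive to the debt-financed one.

```python
# Debt vs. equity payoffs for a self-managed "firm"; numbers are hypothetical.
EQUITY_SHARE = 0.06          # investors' stake under the Thrust Fund deal
DEBT_FACE = 900_000          # hypothetical repayment owed under debt financing

def payoff_equity(income):
    return (1 - EQUITY_SHARE) * income       # she keeps 94% no matter what

def payoff_debt(income):
    return max(income - DEBT_FACE, 0)        # full upside, default caps downside

# A safe $2m lifetime income vs. a coin flip between $0 and $4m:
safe, high = 2_000_000, 4_000_000
gamble_equity = 0.5 * payoff_equity(0) + 0.5 * payoff_equity(high)
gamble_debt = 0.5 * payoff_debt(0) + 0.5 * payoff_debt(high)
```

Equity leaves her exactly indifferent between the safe income and the gamble; debt makes her prefer the gamble by $450,000. That is the risk-loving tilt that debt builds into her incentives.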
So many questions come up, here are just a few.
- Why don’t we replace student loans with student shares? Arguably the reason we stick with debt is that it is good policy to induce risk-taking: with large numbers the aggregate risk is small, and society gains more from the big hits than it loses from the misses.
- Do Kjerstin’s investors get voting rights?
- Does the contract give her the freedom to issue more shares in the future? She wants this option but her investors don’t. The more shares she sells the less incentive she has to work hard.
- Kjerstin now has a huge incentive to take in-kind compensation that is hard to value. In corporate finance, this is called diverting the cash flows. How does her contract deal with that?
(Lid lift: The Morning News.)
Chat Roulette (NSFA) is a textbook random search and matching process. Except that it is missing a key ingredient: an instrument for screening and signaling. That, coupled with free entry, means that everyone’s payoff is driven to zero.
In practice the big problems with Chat Roulette are
- Too many lemons
- Too much searching
- The incentive to do something attention-grabbing in the first few seconds is too strong
On the other hand, I expect the next generation of this kind of service to be a tremendous money-maker. Here are some ideas to improve on it. The general idea is to create a mechanism where better partners are able to more easily find other good partners.
1. Users maintain a score equal to the average length of their past chats. The idea is to give incentives to invest more in each chat, and to reward people who can keep their partners’ attention for longer. A user with a score of x is given the ability to restrict his matches to other users with a score greater than any z≤x he specifies. This is probably prone to manipulation by users who just keep their chats open, inviting their partners to do the same, and pad their numbers.
2. Within the first few seconds of a match, each partner bids an amount of time they would like to commit to the current match. The system keeps the chat open for the smaller of the two numbers. Users maintain a score equal to the average amount of time other users have bid for them. Scores are used to restrict future matching partners just as above.
3. Match users in groups of 10 instead of 2. Each member of the group clicks on one of the others and any mutually-clicking pair joins a chat. This could be coupled with a system like #1 above to mitigate the manipulation problem. Or your score could be the frequency with which others click on you.
4. A simple “like/don’t like” rating system at the end of each chat. In order to make this incentive-compatible, you have an increased chance of meeting the same person again in future matches if both of you like each other. On top of that, your score is equal to the number of times people like you.
5. Same as 4, but your score is computed using ranking algorithms like Google’s PageRank where it’s worth more to be liked by a well-liked partner.
6. Multiple channels with their own independent scores. You could imagine that systems like the above would have multiple equilibria where the tastes of users with the highest scores dominate, thus reinforcing their high scores. Multiple channels would allow diversity by supporting different equilibria.
7. Allow users to indicate the gender preference of their matches. To avoid manipulation, your partners report your gender to the system.
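The PageRank-style score is straightforward to sketch. Here is a toy power iteration over a made-up “likes” graph (the names and data are invented):

```python
# PageRank-style reputation over a toy "likes" graph; data invented.
likes = {                    # who -> the partners they clicked "like" on
    "ann": {"bob", "cara"},
    "bob": {"cara"},
    "cara": {"ann"},
    "dan": {"cara"},
}
users = sorted(set(likes) | {v for liked in likes.values() for v in liked})

DAMPING = 0.85
score = {u: 1 / len(users) for u in users}
for _ in range(50):                              # power iteration
    new = {u: (1 - DAMPING) / len(users) for u in users}
    for liker, liked in likes.items():
        for v in liked:
            # a like is worth more coming from a well-liked user,
            # and is split across everyone that user likes
            new[v] += DAMPING * score[liker] / len(liked)
    score = new
```

Here cara ends up on top not just because three people like her, but because one of them (ann) is herself well-liked. A brand-new user starts at the bottom of the ranking, which is the cold-start problem a signaling mechanism would have to solve.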
These are all screening mechanisms: you earn control over whom you match with. But the system also needs a signaling mechanism: a way for a brand new user to signal to established users that she is worth matching with. The problem is that a good signal requires a commitment to lose reputation if you don’t measure up. But without a way to stop users from just creating new identities, these penalties have no force.
This is a super-interesting design problem and someone who comes up with a good one is going to get rich. (NB: Sandeep’s and my consulting fees remain quite modest.)
Greg Mankiw often says this:
A tax on height follows inexorably from the standard utilitarian approach to the optimal design of tax policy coupled with a well-established empirical regularity.
Because this is part of his argument against income redistribution. As I have said before (and see a nice comment there by Ilya), this is based on a misunderstanding of the theory of taxation. It does not matter what the government’s underlying objective is, whether it is utilitarian or anything else. If the government wants to raise money for whatever purpose, say to provide education, pay the President’s economic advisors, or fight wars, it wants to do so in the least distortionary way.
Minimizing the distortions means making use of instruments that are correlated with ability to pay but are exogenous, i.e. unaffected by tax policy. As Mankiw points out (the “well-established empirical regularity”), height is correlated with ability to pay and clearly the tax code does not affect how tall you are. So by conditioning your tax payments (at least partially) on your height, the government can raise the same amount of revenue as a given pure income tax with less distortionary effects on your labor supply.
It has nothing to do with utilitarianism. (And your natural objection to taxing height therefore says nothing about your attitudes toward income redistribution.)
We tend to think of intellectual property law as targeted mostly at big ideas with big market value. But for every big idea there are zillions of little ideas whose value adds up to more. Little ideas are little because they are either self-contained and make marginal contributions or they are small steppingstones, to be combined with other little ideas, which eventually are worth a lot.
It’s now cheap to spread little ideas, whereas before even very small communication costs made most of them prohibitively expensive to share. In some cases this is good, but in some cases it can be bad.
When it comes to the nuts-and-bolts kinds of ideas, like say how to use Perl to collect data on the most popular Twitter clients, ease of dissemination is good and intellectual property is bad. IP protection would mean that the suppliers of these ideas would withhold lots of them in order to profit from the remainder. Without IP protection there is no economic incentive to keep them to yourself and the infinitesimal cost of sharing them is swamped by even the tiniest pride/warm-glow motives.
Now the usual argument in favor of IP protection is that it provides an economic incentive for generating these ideas. But we are talking about ideas that don’t come from research in the active sense of that word. They are the byproduct of doing work. When it’s cheap to share these ideas, IP protection gets in the way.
The exact same argument applies to many medium-sized ideas as well. And music.
But there are ideas that are pure ideas. They have no value whatsoever except as ideas. For example, a story. Or basic research. The value of a pure idea is that it can change minds. Ideas are most effective at changing minds when they arrive with a splash and generate coordinated attention. If some semblance of the idea existed in print already, then even a very good elaboration will not make a splash. “That’s been said/done before.”
It’s too easy now to spread 1/nth-baked little ideas. Before, when communication costs were high, it took investment in polishing and marketing to bring the idea to light. So ideas arrived slowly enough for coordinated attention, and big enough to attract it. Now, there will soon be no new ideas.
Blogs will interfere with basic research, especially in the social sciences.
When it comes to ideas, here’s one way to think about IP and incentives to innovate. It’s true that any single individual needs extra incentive to spend his time actively trying to figure something out. That’s hard and it takes time. But, given the number of people in the world, 99.999% of the ideas that would be generated by active research would almost certainly just passively occur to at least one individual.
India has proposed a new round of talks with Pakistan. The last meaningful talks in 2007 led to a thawing of relations and real progress till everything was brought to a grinding halt by the terrorist attacks in Mumbai.
What are the payoffs and incentives for the two countries? David Ignatius at the Washington Post offers this analysis:
“The India-Pakistan standoff is like one of those game-theory puzzles where both nations would be better off if they could overcome suspicions and cooperate — in this case, by helping the United States to stabilize the tinderbox of Afghanistan. If Indian leaders meet this challenge, they could open a new era in South Asia; if not, they may watch Pakistan and Afghanistan sink deeper into chaos, and pay the price later.”
The quote offers a theory for how India might gain from peace, but what about Pakistan? Pakistan cannot be treated as a unitary actor. Some part of the elite and perhaps even the general population may gain from an easing of tension and a permanent peace with India. But the Pakistani military has quite different interests. The military dominate Pakistan politically and economically. Their rationale for resources, power and prestige relies on perpetual war, not perpetual peace. Sabotage is a better strategy for them than cooperation with India. The underlying game is not the Prisoner’s Dilemma.
Military payoffs have to be aligned with economic payoffs to encourage cooperation. Economic growth can also generate the surplus to bankroll a bigger army. A poor country needs the threat of war to divert valuable resources into defense. But a rich country does not.
Eddie Dekel points out the following puzzling fact. At the gym most people wipe down the exercise machines and benches after they use them and not before. There are a few obvious social benefits of this policy. For one, you know better than your successor where the towel is most advantageously deployed. Also, the sooner that stuff is removed, the better.
But still it’s a puzzle from the point of view of dynamic efficiency. With this system everyone mops once. But there exists a welfare improving re-allocation where one guy doesn’t mop and after him everyone mops before using the machine. Nobody’s worse off and that one guy is better off. A Pareto improvement.
In fact the ex-post-mop regime is especially unstable because that one guy has a private incentive to trigger the re-allocation. He’s the one who saves effort. So from an abstract point of view this is indeed a puzzle. Moreover, there is this Seinfeldian insight that complicates things even further.
ELAINE: Never mind that, look at the signal I just got.
GEORGE: Signal? What signal?
ELAINE: Lookit. He knew I was gonna use the machine next, he didn’t wipe his sweat off. That’s a gesture of intimacy.
GEORGE: I’ll tell you what that is – that’s a violation of club rules. Now I got him! And you’re my witness!
ELAINE: Listen, George! Listen! He knew what he was doing, this was a signal.
GEORGE: A guy leaves a puddle of sweat, that’s a signal?
ELAINE: Yeah! It’s a social thing.
GEORGE: What if he left you a used Kleenex, what’s that, a valentine?
(conversations with Asher, Ron, Juuso and Eddie. I take all the blame.)
It’s obvious, right? OK, but before you read on, say the answer to yourself.

Is it because he is the most able to make up any time lost by the earlier teammates? Because in the anchor leg you know exactly what needs to be done? Now what about this argument: the total time is just the sum of the individual times, so it doesn’t matter what order they swim in.
That would be true if everyone were swimming (running, potato-sacking, etc.) as fast as they could. But it is the universally accepted strategy to put the fastest swimmer last. If you advocate this strategy you are assuming that not everyone is swimming as fast as they can.
For example, take the argument that in the anchor leg you know exactly what needs to be done. Inherent in this argument is the view that swimmers swim just fast enough to get the job done.
(That tends to sound wrong because we don’t think of competitive athletes as shirkers. But don’t get drawn in by the framing. If you like, say it this way: when the competition demands it, they “rise to the occasion.” Whichever way you say it, they put in more or less effort depending on the competition. And one does not have to interpret this as a cold calculation trading off performance versus effort. Call it race psychology, competitive spirit, whatever. It amounts to the same thing: you swim faster when you need to and therefore slower when you don’t.)
But even so it’s not obvious why this by itself is an argument for putting the fastest last. So let’s think it through. Suppose the relay has two legs. The guy who goes first knows how much of an advantage the opposing team has in the anchor leg, so doesn’t he know the amount by which he has to beat the opponent in the opening leg?
No, for two reasons. First, at best he can know the average gap he needs to finish with. But the anchor leg opponent might have an unusually good swim (or the anchor teammate might have a bad one.) Without knowing how that will turn out, the opening leg swimmer trades off additional effort in return for winning against better and better (correspondingly less and less likely) possible performance by the anchor opponent. He correctly discounts the unlikely event that the anchor opponent has a very good race, but if he knew that was going to happen he would swim faster.
The anchor swimmer gets to see when that happens. So the anchor swimmer knows when to swim faster. (Again this would be irrelevant if they were always swimming at top speed.)
The other reason is similar. You can’t see behind you (or at least your rearward view is severely limited). The opening leg swimmer can only know that he is ahead of his opponent, but not by how much. If his goal is to beat the opening leg opponent by a certain distance, he can only hope to do this on average. He would like to swim faster when the opening leg opponent is behind but doing better than average. The anchor swimmer sees the gap when he takes over. Again he has more information.
There is still one step missing in the argument. Why is it the fastest swimmer who makes best use of the information? Because he can swim faster right? It’s not that simple and indeed we need an assumption about what is implied by being “the fastest.” Consider a couple more examples.
Suppose the team consists of one swimmer who has only one speed and it is very fast and another swimmer who has two speeds, both slower than his teammate. In this case you want the slower swimmer to swim with more information. Because in this case the faster swimmer can make no use of it.
For another example, suppose that the two teammates have the same two speeds but the first teammate finds it takes less effort to jump into the higher gear. Then here again you want the second swimmer to anchor. But this time it is because he gets the greater incentive boost. You just tell the first swimmer to swim at top speed and you rely on the “spirit of competition” to kick the second swimmer into high gear when he’s behind.
More generally, in order for it to be optimal to put the fastest swimmer in the anchor leg it must be that faster also means a greater range of speeds and correspondingly more effort to reach the upper end of that range. The anchor swimmer should be the team’s top under-achiever.
Exercises:
- What happens in a running-backwards relay race? Or a backstroke relay (which I don’t think exists)?
- In a swimming relay with 4 teammates why is it conventional strategy to put the slowest swimmer third?
Government organizations often compete rather than cooperate. They compete for funding from the central government, and if say the C.I.A. succeeds in some task and the N.C.T.C. does not, money, status, access etc. might move naturally towards the former from the latter. If the N.C.T.C. helps the C.I.A. catch a terrorist then, ironically, their own hard work is punished. On the other hand, competition helps give the bureaucracies the incentive to work hard. This is the positive effect that must be counterbalanced against the negative effect on incentives to cooperate. What is the optimal incentive scheme?
This seems like a pretty important question and someone has studied an important part of it. The classic paper is Hideshi Itoh’s “Incentives to Help in Multi-Agent Situations.”
Suppose the marginal cost of helping is zero at zero effort of helping. Then, if one agent’s help reduces the other’s marginal cost of effort at his main task, it is optimal to incentivize teamwork. How do you do that? One agent has to be paid when the other succeeds. The assumptions that efforts are complements and that the marginal cost of help is zero at zero do not seem to be a big stretch in the present circumstances. The benefits of greater competition (lower resource costs) must be traded off against the costs (less cooperation, and hence more chance of a successful terrorist attack if “dots are not connected” across organizations).
Itoh also shows that if the marginal cost of helping is positive at zero help, the optimal scheme either involves total specialization or, more surprisingly, substantial teamwork. This is because giving agents the incentive to help each other just a little is very costly, given the cost condition. So, if you are going to incentivize teamwork at all, it is optimal to incentivize large chunks of it. If the benefits of catching terrorists are large, this logic also pushes the optimal scheme towards teamwork.
With much information classified, it is impossible to know how much intra-bureaucracy competition contributed to intelligence failure. But whether it did or not, it is worth ensuring that good mechanisms for cooperation are in place.
Tyler Cowen, quoting Ezra Klein on “penalties” for failing to purchase private insurance:
If you don’t have employer-based coverage, Medicare, Medicaid, or anything else, and premiums won’t cost more than 8 percent of your monthly income, and you refuse to purchase insurance, at that point, you will be assessed a penalty of up to 2 percent of your annual income. In return for that, you get guaranteed treatment at hospitals and an insurance system that allows you to purchase full coverage the moment you decide you actually need it. In the current system, if you don’t buy insurance, and then find you need it, you’ll likely never be able to buy insurance again. There’s a very good case to be made, in fact, that paying the 2 percent penalty is the best deal in the bill.
Via The Volokh Conspiracy, I enjoyed this discussion of the NFL instant replay system. A call made on the field can only be overturned if the replay reveals conclusive evidence that the call was in error. Legal scholarship has debated the merits of such a system of appeals relative to the alternative of de novo review: the appellate body considers the case anew and is not bound by the decision below.
If standards of review are essentially a way of allocating decisionmaking authority between trial and appellate courts based on their relative strengths, then it probably makes sense that the former get primary control over factfinding and trial management (i.e., their decisions on those matters are subject only to clear error or abuse of discretion review), while the latter get a fresh crack at purely “legal” issues (i.e., such issues are reviewed de novo). Heightened standards of review apply in areas where trial courts are in the best place to make correct decisions.
These arguments don’t seem to apply to instant replay review. The replay presumably is a better document of the facts than the realtime view of the referee. But not always. Perhaps the argument in favor of deference to the field judge is that it allows the final verdict to depend on the additional evidence from the replay only when the replay angle is better than that of the referee.
That argument works only if we hold constant the judgment of the referee on the field. The problem is that the deferential system alters his incentives due to the general principle that it is impossible to prove a negative. For example consider the (reviewable) call of whether a player’s knee was down due to contact from an opposing player. Instant replay can prove that the knee was down but it cannot prove the negative that the knee was not down. (There will be some moments when the view is obscured, we cannot be sure that the angle was right, etc.)
Suppose the referee on the field is not sure and thinks that with 50% probability the knee was down. Consider what happens if he calls the runner down by contact. Because it is impossible to prove the negative, the call will almost surely not be overturned and so with 100% probability the verdict will be that he was down (even though that is true with only 50% probability.)
Consider instead what happens if the referee does not blow the whistle and allows the play to proceed. If the call is challenged and the knee was in fact down, then the replay will very likely reveal that. If not, not. The final verdict will be highly correlated with the truth.
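The asymmetry is easy to confirm with a quick Monte Carlo (my own sketch, with a made-up 90% chance that the replay conclusively shows a knee that was in fact down, and no chance that it ever proves the negative):

```python
import random

random.seed(0)

TRIALS = 100_000
P_REPLAY_SHOWS_DOWN = 0.9  # replay proves the knee was down, when it was

def final_verdict(call_down, knee_down):
    # Deferential review: the field call stands unless the replay
    # conclusively contradicts it. Replay can prove "down" but can
    # never prove "not down" (the negative).
    replay_proves_down = knee_down and random.random() < P_REPLAY_SHOWS_DOWN
    if call_down:
        return True                # "not down" can never be proven; call stands
    return replay_proves_down      # overturned only if replay shows the knee down

def accuracy(call_down):
    correct = 0
    for _ in range(TRIALS):
        knee_down = random.random() < 0.5   # referee is genuinely unsure
        correct += final_verdict(call_down, knee_down) == knee_down
    return correct / TRIALS

acc_whistle = accuracy(call_down=True)    # about 0.50: verdict is always "down"
acc_no_call = accuracy(call_down=False)   # about 0.95: replay corrects most errors
print(acc_whistle, acc_no_call)
assert acc_no_call > acc_whistle
```

The non-call wins by a wide margin exactly because the replay can only correct it in one direction.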
So the deferential system means that a field referee who wants the right decision made will strictly prefer a non-call when he is unsure. More generally this means that his threshold for making a definitive call is higher than what it would be in the absence of replay. This probably could be verified with data.
On the other hand, de novo review means that, conditional on review, the call made on the field has no bearing. This means that the referee will always make his decision under the assumption that his decision will be the one enforced. That would ensure he has exactly the right incentives.
Saddam promoted incompetents in his army deliberately, believing they would be less likely to sponsor a coup. There is a similar process that can operate within firms, the Peter Principle: If firms automatically promote the best performer at level k of the hierarchy to the level k+1, people will be promoted till they find their level of incompetence. Saddam’s promotion policy can be justified on rational choice grounds and similarly we might ask how firms can counteract the logic underlying the Peter Principle.
The New York Times magazine has a section on interesting ideas of the year. One of them concerns the Peter Principle. A group of Italian physicists did a computer simulation with various promotion policies. Random promotion outperformed a “promote the best” policy. It increases the chance that someone who is actually good at the job makes it to the next level. This seems pretty straightforward and eminently amenable to a simple analytical model. But peer review is even better than random promotion: ask the co-workers who might be good at the higher level job. If they have big incentives to lie, at worst you can ignore them and get random promotion as the optimal policy. Or better, share some of the rents from promoting the right person with the reviewers and get some useful information out of them.
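Here is a stripped-down sketch of that mechanism (my own toy version, not the physicists’ actual model): under the Peter hypothesis, competence at the next level is a fresh draw, so “promote the best” strips the lower level of its best performer for no expected gain upstairs.

```python
import random

random.seed(1)

def run(policy, trials=20_000, n=10):
    # One vacancy upstairs. Under the Peter hypothesis, competence at
    # the higher level is independent of performance at the current one,
    # so the promoted worker's new competence is a fresh draw either way.
    total_eff = 0.0
    for _ in range(trials):
        level1 = [random.uniform(0, 10) for _ in range(n)]
        promoted = max(level1) if policy == "best" else random.choice(level1)
        level1.remove(promoted)
        level2_competence = random.uniform(0, 10)   # fresh draw (Peter hypothesis)
        total_eff += sum(level1) + level2_competence
    return total_eff / trials

eff_best = run("best")      # loses the top performer downstairs
eff_random = run("random")  # loses only an average performer
print(round(eff_best, 1), round(eff_random, 1))
assert eff_random > eff_best
```

If competence instead carried over to the next level, “promote the best” would dominate; the ranking of policies hinges entirely on which hypothesis you believe.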
These are old ideas from contract theory but we are clearly not doing a good job at getting our insights to the New York Times. On that note, let me congratulate Dan Ariely and his co-authors who have at least three of the best ideas of 2009. The experiment involving the drunks playing the ultimatum game was the most fun – won’t give the point away so you can enjoy it yourself! But it makes me think Jeff and I should do some experiments in our wine club. I wonder if we can get the NSF to support it so I can finally taste a Petrus.
I am going to write an insurance contract with you. We agree that if an accident happens I will cover you unless you are a type prone to accidents or you don’t try hard enough to avoid an accident.
There are two ways we can implement these conditionals. We could investigate whether they hold at the time we sign the contract or we could wait until an accident happens. Since an accident is unlikely to happen, the second method often avoids unnecessary costs of investigation and makes it cheaper to enforce the contract. That means I can offer you a lower premium.
All of this assumes that the conditions that would nullify the contract are
- completely described either in the contract or in law,
- fully understood by the insured, and
- within the information set of the insured
Arguments against rescission can be based on any one of these three being violated. Certainly #1 is violated in practice if insurance companies are free to present any evidence suggestive of, for example, pre-existing conditions. Violations of #2 plague all economic analysis of contracts, and no doubt it is a problem here as well. And an argument against rescission could be based on #3, even if we assume that contracts are complete and voluntary.
Indeed I believe that #3 is the basis for a response to the obvious question: “If allowing for rescission is against the interests of the insured, why don’t insurers compete by offering no-rescission contracts?” These would have higher premiums for the reasons given above. Insurees would tend to reject these in favor of lower-premium rescission-permitting contracts when they are not aware of buried, jargon-laden medical records that would nullify their coverage. In fact, the asymmetric access to this information superimposes an artificial asymmetric-information problem on an insurance market that is already plagued by adverse selection.
Here’s a research problem: what does a competitive equilibrium look like in an insurance market where both types of contract can be offered? Assume that the insurer has superior information about documentation of pre-existing conditions, and conditions contract terms on its private information.
You are playing in your local club golf tournament, getting ready to tee off, and there is a last-minute addition to the field… Tiger Woods. Will you play better or worse?
The theory of tournaments is an application of game theory used to study how workers respond when you make them compete with one another. Professional sports are ideal natural laboratories where tournament theory can be tested. An intuitive idea is that if two contestants are unequal in ability but the tournament treats them equally, then both contestants should perform poorly (relative to the case when each is competing with a similarly-abled opponent.) The stronger player is very likely to win so the weaker player conserves his effort which in turn enables the stronger player to conserve his effort and still win.
There is a paper by Kellogg professor Jennifer Brown that examines this effect in professional golf tournaments. She compares how the average competitor performs when Tiger Woods is in the tournament relative to when he is not. Controlling for a variety of factors, Tiger Woods’ presence increases (i.e. worsens, remember this is golf) the score of the average golfer, even in the first round of the tournament.
There are actually two reasons why this should be true. First is the direct incentive effect mentioned above. The other is that lesser golfers should take more risks when they are facing tougher competition. Surprisingly, this is not evident in the data. (I take this to be bad news for the theory, but the paper doesn’t draw this conclusion.)
Also, since golf is a competition among many players and there are prizes for second, third etc., the theory does not necessarily imply a Tiger Woods effect. For example, consider the second-best player. For her, what matters is the drop-off in rewards as a player falls from first to second relative to second to third. If the latter is the steeper fall, then Tiger Woods’ presence makes her work harder. Since the paper looks at the average player, then what should matter is something like concavity vs. convexity of the prize schedule.
Also, remember the hypothesis is that both players phone it in. Unfortunately we don’t have a good control for this because we can’t make Tiger Woods play against himself. Perhaps the implied empirical hypothesis says something about the relative variance in the level of play. When Tiger Woods is having a bad season, competition is tighter and that makes him work harder, blunting the effect of the downturn. When he is having a good season, he slacks off again blunting the effect of the boom. By contrast, for the weaker player the incentive effects make his effort pro-cyclical, amplifying temporal variations in ability.
Jonah Lehrer (to whom my fedora is flipped) prefers a psychological explanation.
R. Duncan Luce has been elected fellow of the Econometric Society in the year 2009. He is 84. How could it take so long?
Here’s a model. There is a large set of economists and each year you have to decide which to admit to a select group of “fellows.” Assume away the problems of committee decision-making and say that an economist will be admitted if his achievements are above some standard. The problem is that there are many economists and it’s costly to investigate each one to see if they pass the bar.
So you pick a shortlist of candidates who are contenders and you investigate those. Some pass, some don’t. Now, the next problem is that there are many fellows and many non-fellows and it’s hard to keep track of exactly who is in and who is out. And again it’s costly to go and check every vita to find out who has not been admitted yet.
So when you pick your shortlist, you are including only economists who you think are not already fellows. Someone like Duncan Luce, who certainly should have been elected 30 years ago, most likely was elected 30 years ago, so you would never consider putting him on your shortlist.
Indeed, the simple rule of thumb you would use is to focus on young people for your shortlist. Younger economists are more likely to be both good enough and not already fellows.
There were no fire engines, horse-drawn or otherwise. The citizens were the fire department. Each house had its own firebuckets and in the event of a fire, everyone was meant to pitch in. That meant taking your firebucket and joining the line of people from the water tank to the fire.
Does the story so far give you a warm, fuzzy feeling? Friendly folk working together, helping each other out and living by the Kantian categorical imperative. Let me rain on your parade – I am an economist after all. The private provision of public goods is subject to a free-rider problem: The costs of helping someone else outweigh the direct benefits to me so I don’t do it. Everyone reasons the same way so we get the good old Prisoner’s Dilemma and a collectively worse equilibrium outcome.
People have to come up with some other mechanism to mitigate these incentives. In Concord, they chose a contractual solution. Each fire-bucket had the owner’s name and address on it. If any were missing from the fire, you could identify the free-rider and they were fined.
This is the story we got from the excellent tour guide at the Old Manse house in Concord. Home to William Emerson, rented by Nathaniel Hawthorne and overlooking the North Bridge, the location of the first battle of the American Revolution. (We were carefully told that earlier that same historic day in Lexington, although the Redcoats fired, the Minutemen did not fire back so that was not a real battle.) The house has the old firebuckets hanging up by the staircase.
We have spent most of the course using the tools of dominant-strategy mechanism design to understand efficient institutions and second-best tradeoffs. These topics have a normative flavor: they describe the limits of what could be achieved if institutions were designed with efficiency as the goal.
But most economic activity is regulated not by efficiency-motivated planners but by self-interested agents. This adds an additional friction which potentially moves us even further from the first-best. Self-interested mechanism designers will probably introduce new distortions into their mechanisms as they try to tilt the distribution of surplus their way.
In this lecture we use the model of an auction to see the simplest version of this. We consider the problem of designing an auction for two bidders with the goal of maximizing revenue rather than efficiency. We do not have the tools necessary to do the full-blown optimal auction problem but we can get intuition by studying a narrower problem: find an optimal reserve price in an English auction.
With a diagram we can see the tradeoffs arising from adjusting the reserve price above the efficient level. The seller loses because sometimes the good will go unsold but in return he gains from receiving a higher price when the good is sold. The size and shape of the regions where these gains and losses occur suggest that it should be profitable to raise the reserve price above cost.
Without solving explicitly for the optimal reserve price we can give a pretty compelling, albeit not 100% formal, argument that this is indeed the case. At the efficient reserve price (equal to the cost of selling) total surplus is maximized. A graph of total expected surplus as a function of the reserve price should be locally flat at the efficient point. (We are implicitly assuming differentiability of total expected surplus, which holds if the distribution of bidder values is nice.) Buyers’ utility is unambiguously declining as the reserve price increases. Since total surplus is by definition the sum of buyers’ utility and seller profit, it follows that seller profit is locally increasing as the reserve price is raised above the efficient level.
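For a concrete check (my own worked example: two bidders with independent uniform [0,1] values and a selling cost of zero, where the textbook-optimal reserve is 1/2), we can compute expected revenue at a few reserve prices:

```python
# English auction, two bidders with i.i.d. U[0,1] values, seller cost 0.
# With reserve r: if exactly one value exceeds r, the winner pays r;
# if both exceed r, the winner pays the second-highest value.

def expected_revenue(r):
    # P(exactly one bidder above r) = 2*r*(1-r), payment r.
    # P(both above r) = (1-r)^2, expected payment E[min | both >= r]
    #   = r + (1-r)/3 for uniform values.
    return 2 * r * (1 - r) * r + (1 - r) ** 2 * (r + (1 - r) / 3)

for r in [0.0, 0.25, 0.5, 0.75]:
    print(r, round(expected_revenue(r), 4))

# Revenue rises as the reserve moves above the efficient level (cost = 0)...
assert expected_revenue(0.25) > expected_revenue(0.0)
# ...and peaks at r = 1/2 for this distribution.
assert expected_revenue(0.5) > expected_revenue(0.25)
assert expected_revenue(0.5) > expected_revenue(0.75)
```

At r = 0 expected revenue is 1/3; at the optimal r = 1/2 it is 5/12, so the profit-maximizing seller indeed leaves the good unsold with positive probability.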
Thus, while we know that in principle this allocation problem can be solved efficiently, when the allocation is controlled by a profit maximizer, there is a new source of inefficiency. The natural next question is whether competition among profit-maximizing sellers will mitigate this.
We all know about generic drugs and their brand-name counterparts. The identical chemical with two different prices. Health insurance companies try to keep costs down by incentivizing patients to choose generics. You have a larger co-pay when you buy the name brand. Except when you don’t:
Serra, a paralegal, went to his doctor a few months ago for help with acne. She prescribed Solodyn. Serra told her he’d previously taken a generic drug called minocycline that worked well. The doctor told him that the two compounds are basically the same, but that you have to take the generic version in the morning and the evening. With Solodyn, you take one dose a day.
Serra told her that if the name-brand medicine was going to cost a lot more, he’d prefer the generic. “And then she presented this card,” he says. She explained that it was a coupon, and that he should give it to the pharmacist for a break on his insurance copay.
Without the card, Serra’s copay would have been $154.28. But when he got to the pharmacy, he presented his card. “They went to ring it up at the register,” he remembers. “And when it came up, the price was $10.”
NPR has the story. Chupulla chuck: Mike Whinston.
