
The eternal Kevin Bryan writes to me:

Consider an NFL team down 15 who scores very late in the game, as happened twice this weekend. Everybody kicks the extra point in that situation instead of going for two, and is then down 8.  But there is no conceivable “value of information” model that can account for this – you are just delaying the resolution of uncertainty (since you will go for two after the next touchdown).  Strange indeed.

Let me restate his puzzle.  If you are in a contest and success requires costly effort, you want to know the return on effort in order to make the most informed decision.  In the situation he describes if you go for the 2-pointer after the first touchdown you will learn something about the return on future effort.  If you make the 2 points you will know that another touchdown could win the game.  If you fail you will know that you are better off saving your effort (avoiding the risk of injury, getting backups some playing time, etc.)

If instead you kick the extra point and wait until a second touchdown before going for two there is a chance that all that effort is wasted.  Avoiding that wasted effort is the value of information.

The upshot is that a decision-maker always wants information to be revealed as soon as possible.  But in football there is a separation between management and labor.  The coach calls the plays but the players spend the effort.  The coach internalizes some but not all of the players’ cost of effort. This can make the value of information negative.

Suppose that both the coach and the players want maximum effort whenever the probability of winning is above some threshold, and no effort when it’s below.  Because the coach internalizes less of the cost of effort, his threshold is lower.  That is, if the probability of winning falls into the intermediate range below the players’ threshold and above the coach’s threshold, the coach still wants effort from them but the players give up.  Finally, suppose that after the first touchdown the probability of winning is above both thresholds.

Then the coach will optimally choose to delay the resolution of uncertainty.  Because going for two is either going to move the probability up or down.  Moving it up has no effect since the players are already giving maximum effort.  Moving it down runs the risk of it landing in that intermediate area where the players and coach have conflicting incentives.  Instead by taking the extra point the coach gets maximum effort for sure.
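A minimal numerical sketch of this argument, with every probability invented for illustration: kicking keeps the win probability safely above the players’ threshold, while a failed early two-point try can drop it into the dead zone where the players quit even though the coach still wants effort.

```python
# Toy numbers (all assumed for illustration): probability of winning
# after the first touchdown, and how an early 2-point try shifts it.
p_after_td = 0.25          # win prob if we just kick the extra point
p_if_2pt_good = 0.35       # win prob if the early 2-pt try succeeds
p_if_2pt_fail = 0.12       # win prob if it fails
q = 0.48                   # chance the 2-pt conversion succeeds

coach_threshold = 0.10     # coach wants effort above this win prob
player_threshold = 0.15    # players only give effort above this one

def effort(win_prob):
    """Players give maximum effort only above their own threshold."""
    return win_prob > player_threshold

# Kick the PAT: the win probability stays put and the players keep trying.
effort_if_kick = effort(p_after_td)

# Go for two now: with probability 1 - q the win probability lands in
# the "dead zone" between the two thresholds, where the coach still
# wants effort but the players give up.
chance_players_quit = (1 - q) if not effort(p_if_2pt_fail) else 0.0

print(effort_if_kick, chance_players_quit)  # True 0.52
```

With these assumed numbers, delaying the resolution of uncertainty guarantees effort, while resolving it early destroys effort more than half the time.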


Mitt Romney and Paul Ryan have proposed a plan to allow private firms to compete with Medicare to provide healthcare to retirees. Beginning in 2023, all retirees would get a payment from the federal government to choose either Medicare or a private plan.  The contribution would be set at the second lowest bid made by any approved plan.

Competition has brought us cheap high definition TVs, personal computers and other electronic goods but it won’t give us cheap healthcare.  The healthcare market is complex because some individuals are more likely to require healthcare than others.  The first point is that as firms target their plans to the healthy, competition is more likely to increase costs than lower them.  David Cutler and Peter Orszag have made this argument.  But there is a second point: the same factors that lead to higher healthcare costs also work against competition between Medicare and private plans.  Unlike producers of HDTVs, private plans will not cut prices to attract more consumers so competition will not reduce the price of Medicare.  A simple example exposes the logic of these two arguments.

Suppose there are two couples, Harry and Louise and Larry and Harriet.  Harry and Louise have a healthy lifestyle and won’t need much healthcare but Larry and Harriet are unhealthy and are likely to require costly treatments in the future.  Let’s say the Medicare price is $25,000/head as this gives Medicare “zero profits”.  Harry and Louise incur much lower costs than this and Larry and Harriet much higher.  Therefore, at the federal contribution, private plans make a profit if they insure Harry and Louise and a loss if they insure Larry and Harriet.  So, private providers will insure the former and reject the latter.  Or their plans will deliberately exclude medical treatments that Larry and Harriet might need to discourage them from joining.  The overall effect will be to increase healthcare costs.  This is because Harry and Louise get premium support of $50,000 total that is greater than the healthcare costs they incur now, so they impose higher costs on the federal government than they do currently.  Larry and Harriet will be excluded by the private plans and will get coverage from Medicare.  This will cost more than $50,000 total so there will be no cost savings from them either.  Total costs will be higher than $100,000 as surplus is being handed over to Harry and Louise and their insurance companies.
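The arithmetic of the example can be laid out in a few lines; the individual cost figures are assumptions chosen so that Medicare breaks even at $25,000 a head.

```python
# Medicare breaks even at $25,000 per head, so premium support is set there.
support_per_head = 25_000

# Assumed true annual costs (hypothetical): the healthy couple costs
# less than the support, the unhealthy couple costs more.
cost_healthy = 15_000      # Harry or Louise
cost_unhealthy = 35_000    # Larry or Harriet

# Today, traditional Medicare pays actual costs for all four people.
status_quo = 2 * cost_healthy + 2 * cost_unhealthy

# Under cream-skimming: private plans take the healthy couple and
# collect the full support; Medicare keeps the unhealthy couple.
with_private_plans = 2 * support_per_head + 2 * cost_unhealthy

print(status_quo, with_private_plans)  # 100000 120000
```

The $20,000 difference is exactly the surplus handed to the healthy couple and their insurers.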

To deal with this cream-skimming, we might regulate the marketplace.  It might seem to make sense to require open enrollment to all private plans and stipulate that all plans at a minimum have the same benefits as the traditional Medicare plan.  Indeed, the Romney/Ryan plan includes these two regulations.  But this just creates a new problem.

Suppose the Medicare plan and all the private plans are being sold at the same price.  The private plans target marketing at healthy individuals like Harry and Louise and include benefits such as “free” gym membership that are more likely to appeal to them. Hence, they still cream-skim to some extent and achieve a better selection of participants than the traditional public option.  (This is actually the kind of thing that happens in the current Medicare Advantage system. Sarah Kliff has an article about it and Mark Duggan et al have an academic working paper studying Medicare Advantage in some detail.)  So total healthcare costs will again be higher than in the traditional Medicare system.

But there is an additional effect.  Traditional competitive analysis would predict that one private plan or another will undercut the other plans to get more sales and make more profits. This is the process that gives us cheap HDTVs. The hope is that similar price competition should reduce the costs of healthcare. Unfortunately, competition will not work in this way in the healthcare market because of adverse selection.

Going back to our story, if one plan is cheaper than the others, priced at say $20,000, it will attract huge interest, both from healthy Harry and Louise but also from unhealthy Larry and Harriet.  After all, by law, it must offer the same minimum basket of benefits as all the other plans.  So everyone will want to choose the cheaper plan because they get the same minimum benefits anyway.  Also by law, the plan must accept everyone who applies including Larry and Harriet.  So, while the cheapest plan will get lots of demand, it will attract unhealthy individuals whom the insurer would prefer to exclude – this is adverse selection.  Insurers get a better shot at excluding Larry and Harriet if they keep their price high and dump them on Medicare.  This means profits of private plans might actually be higher if the price is kept high and equal to the other plans and the business strategy focused on ensuring good selection rather than low prices.  An HDTV producer doesn’t face any strange incentives like this – for them a sale is a sale and there is no threat of future costs from bad selection.
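A toy profit comparison makes the insurer’s incentive concrete; all dollar figures are invented, with the healthy couple costing $15,000 a head and the unhealthy couple $35,000 as before.

```python
# Assumed per-head annual costs for the two couples.
cost = {"healthy": 15_000, "unhealthy": 35_000}
high_price, low_price = 25_000, 20_000

# Undercut the market: open enrollment means everyone, healthy and
# unhealthy alike, switches to the cheap plan.
profit_undercut = (2 * (low_price - cost["healthy"])
                   + 2 * (low_price - cost["unhealthy"]))

# Match the high price and cream-skim: attract only the healthy couple
# and dump the unhealthy couple on traditional Medicare.
profit_skim = 2 * (high_price - cost["healthy"])

print(profit_undercut, profit_skim)  # -20000 20000
```

With these numbers the price-cutter loses money on every unhealthy enrollee it is forced to accept, so no plan cuts price.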

So, adverse selection prevents the kind of competition that lowers prices.  The invisible hand of the market cannot reduce costs of provision by replacing the visible hand of the government.

The difference between cycling and badminton:

“I just crashed, I did it on purpose to get a restart, just to have the fastest ride. I did it. So it was all planned, really,” Hindes reportedly said immediately after the race. He modified his comments at the official news conference to say he lost control of his bike.

The opposition took it in stride:

French officials did not formally complain about the British tactic.

“You have to make the most of the rules. You have to play with them in a competition and no one should complain about that,” the France team’s technical director, Isabelle Gautheron, told The Associated Press.


“He (Hindes) should not have told the truth,” Daniel Morelon, a Frenchman who coaches the China team, told the AP. “It’s part of the game, but you should not tell others.”

Eight female badminton players were disqualified from the Olympics on Wednesday for trying to lose matches the day before, the Badminton World Federation announced after a disciplinary hearing.

The players from China, South Korea and Indonesia were accused of playing to lose in order to face easier opponents in future matches, drawing boos from spectators and warnings from match officials Tuesday night.

All four pairs of players were charged with not doing their best to win a match and abusing or demeaning the sport.

Apparently the Badminton competition has the typical structure of a preliminary round followed by an elimination tournament.  Performance in the preliminary round determines seeding in the elimination tournament.  The Chinese and South Korean teams had already qualified for the elimination tournament but wanted to lose their final qualifying match in order to get a worse seeding in the elimination tournament.  They must have expected to face easier competition with the worse seeding.

This widely-used system is not incentive-compatible.  This is a problem with every sport that uses a seeded elimination tournament.  Economists and market designers have fixed public school matching and kidney exchange; let’s fix tournament seeding.  Here are two examples to illustrate the issue:

1. Suppose there are only three teams in the competition.  Then the elimination tournament will have two teams play in a first elimination round and the remaining team will have a “bye” and face the winner in the final.  This system is incentive compatible.  Having the bye is unambiguously desirable so all teams will play their best in the qualifying to try and win the bye.

2. Now suppose there are four teams.  The typical way to seed the elimination tournament is to put the top-performing team against the worst-performing team in one match and the middle two teams in the other match.  But what if the best team in the tournament has bad luck in the qualifying and will be seeded fourth?  Then no team wants to win the top seed and there will be sandbagging.

As I see it the basic problem is that the seeding is too rigid.  One way to try and improve the system is to give the teams some control over their seeding after the qualifying round is over.  For example, we order the teams by their performance then we allow the top team to choose its seed, then the second team chooses, etc. The challenge in designing such a system is to make this seed-selection stage incentive-compatible.  The risk is that the top team chooses a seed and then after all others have chosen theirs the top team regrets its choice and wants to switch.  If the top team foresees this possibility it may not have a clear choice and this instability is not only problematic in itself but could ruin qualifying-round incentives again.

So that is the question.  As far as I know there is no literature on this.  Let us, the Cheap Talk community, solve this problem.  Give your analysis in the comments and if we come up with a good answer we will all be co-authors.

UPDATE:  It seems we have a mechanism which solves some problems but not all and a strong conjecture that no mechanism can do much better than ours.  GM was the first to suggest that teams select their opponents with higher qualifiers selecting earlier and Will proposed the recursive version.  (alex, AG, and Hanzhe Zhang had similar proposals.)  The mechanism, let’s call it GMW, works like this:

The qualifiers are ranked in descending order of qualifying results.  (In case the qualifying stage produces only a partial ranking, as is the case with the group stages in the FIFA World Cup, we complete the ranking by randomly ordering within classes.)  In the first round of the elimination stage the top qualifier chooses his opponent.  The second qualifier (if he was not chosen!) then chooses his opponent from the teams that remain.  This continues until the teams are paired up.  In the second round of elimination we pair teams via the same procedure, again ordering the surviving teams according to their performance in the qualifying stage.  This process repeats until the final.
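For concreteness, here is a sketch of one elimination round of GMW in Python. The preference rule is an assumption made for the example (each chooser takes the lowest-ranked opponent left); the mechanism itself only requires that each chooser picks from whoever remains.

```python
def gmw_pairings(ranked_teams, prefers):
    """One elimination round of the GMW mechanism: the best-ranked
    unmatched qualifier picks an opponent from those still unmatched.

    ranked_teams: list of teams, best qualifier first.
    prefers: function (chooser, candidates) -> chosen opponent.
    """
    remaining = list(ranked_teams)
    pairs = []
    while remaining:
        chooser = remaining.pop(0)            # highest remaining qualifier
        opponent = prefers(chooser, remaining)
        remaining.remove(opponent)
        pairs.append((chooser, opponent))
    return pairs

# A simple (assumed) preference: every chooser picks the lowest-ranked,
# presumably weakest, opponent left.
pick_weakest = lambda chooser, candidates: candidates[-1]

print(gmw_pairings(["A", "B", "C", "D"], pick_weakest))
# [('A', 'D'), ('B', 'C')]
```

Rerunning the same procedure on the survivors of each round, reordered by qualifying performance, gives the full mechanism.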

It was pointed out by David Miller (also JWH with a concrete example, and afinetheorem) that GMW is not going to satisfy the strongest version of our incentive compatibility condition and indeed no mechanism can.

Let me try to formalize the positive and negative result.  Let’s consider two versions of No Envy.  They are strong and weak versions of a requirement that no team should want to have a lower ranking after qualifying.

Weak No Envy:  Let P_k(r,h) be the pairing that results in stage k of the elimination procedure when the ordering of teams after the qualifying stage was r and the history of eliminations prior to stage k is given by h.  Let r’ be the ordering obtained by altering r by moving team x to some lower position without altering the relative ordering of all other teams.  We insist that for every r, k, h, and x, the pairing P_k(r,h) is preferred by team x to the pairing P_k(r’,h).

Strong No Envy:  Let r’ be an ordering that obtains by moving team x to some lower position and possibly also altering the relative positions of other teams.  We insist that for every r,k,h, and x, the pairing P_k(r,h) is preferred by team x to P_k(r’,h).

GMW satisfies Weak No Envy but no mechanism satisfies Strong No Envy.  (The latter is not quite a formal statement because it could be that the teams’ pairing choices, which come from the exogenous relative strengths of teams, make Strong No Envy hold “by accident.”  We really want No Envy to hold for every possible pattern of relative strengths.)

One could also weaken Strong No Envy and still get impossibility.  The interesting impossibility result would find exactly the kind of reorderings r->r’ that cause problems.

Finally, we considered a second desideratum akin to strategy-proofness.  We want the mechanism that determines the seedings to be solvable in dominant strategies.  Note that this is not really an issue when the teams are strictly ordered in objective strength and this ordering is common knowledge.  It becomes an issue when there is some incomplete information (an issue raised by AG), and maybe also when there are heterogeneous strengths and weaknesses (also mentioned by AG).

Formalizing this may bring up some new issues but it appears that GMW is strategy-proof even with incomplete information about teams’ strengths and weaknesses.

Finally, there are some interesting miscellaneous ideas brought up by Scott (you can unambiguously improve any existing system by allowing a team who wins a qualifying match to choose to be recorded as the loser of the match) and DRDR (you minimize sandbagging, although you don’t eliminate it, by having a group format for qualifiers and randomly pairing groups ex post to determine the elimination matchups; this was also suggested by Erik, ASt and SX).

Here’s a model of self-confidence. People meet you and they decide if they admire/respect/lust after you. You can tell if they do. When they do you learn that you are more admirable/respectable/attractive than you previously knew you were. Knowing this increases your expectation that the next person will react the same way. That means that when you meet the next person you are less nervous about how they will judge you. This is self-confidence.

Your self-confidence makes a visible impression on that next person. And it’s no accident that your self-confidence makes them admire/respect/lust after you more than they would if you were less self-confident. Because your self-confidence reveals that the last person felt the same way. When trying to figure out whether you are someone worthy of admiration, respect, or lust, it is valuable information to know how other people decided because people have similar tastes on those dimensions.

And of course it works in the opposite direction too. People who are judged negatively lose self-confidence and their unease is visible to others and makes a poor impression.

For this system to work well it must escape herding and prevent manipulation. Herding would be a problem if confident people ignore that others admire them only because they are confident and they allow these episodes to further fuel their confidence. I believe that the self-confidence mechanism is more sophisticated than this. Celebrities complain about being unable to have real relationships with regular people because regular people are unable to treat celebrities like regular people. A corollary of this is that a celebrity does not gain any more confidence from being mobbed by fans. A top-seeded tennis player doesn’t gain any further boost in confidence from a win over a low-ranked opponent who wilts on the court out of awe and intimidation.

Herding may be harder to avoid on the downside. If people who lack confidence are shunned they may never get the opportunity to prove themselves and escape the confidence trap.

And notwithstanding self-help books that teach you tricks to artificially boost your self-confidence, I don’t think manipulation is a problem either. Confidence is an entry, nothing more.  When you are confident people are more willing to get to know you better. But once they do they will learn whether your self-confidence is justified. If it isn’t you may be worse off than if you never had the entry in the first place.


I have a theory that your siblings determine how tidy you are in your adult life, but I am not exactly sure how it all works out.

My theory is based on public goods and free-riding.  If as a kid you shared a room then you and your sibling didn’t internalize the full marginal social value of your efforts at tidying up and as a consequence your room was probably a mess.  At least messier than it would have been if you had the room to yourself.

This would suggest that if you want to know whether your girlfriend is going to be a tidy roommate when you shack up one easy clue is whether she has a sister.  If not then she probably had a room to herself and she is probably accustomed to tidiness.

But here is where I start to think it can go the other way. A kid who shares a room needs to adapt to the free-rider problem. It pays off if she can develop a tit-for-tat strategy with her sibling to maintain incentives for mutual tidiness. This kind of behavioral response is most credible when it stems from an innate preference for cleanliness. Bottom line, it can be optimal for a room-sharing sibling to become more fussy about a clean room.

As I said I am not sure how it all balances out. But I have a few data points. I have two brothers and we were all slobs as kids but now I am very tidy. My wife has no sisters and if I put enough negatives in this sentence then when she reads this it will be hard for her to figure out how untidy I am herein denying she never fails not to be.

Her brothers also cannot be accused of coming dangerously close to godliness either but I don’t think they shared a room much as kids. One of them now lives in an enviably tidy home but I credit that to his wife who I believe grew up with two sisters.

My son has his own room and at age 5 he is already the cleanest person in our house. He is also the best dressed so there may be something more going on there. My two daughters have been known to occasionally tunnel through the pile of laundry on their (shared) floor just to remind themselves of the color of their carpet.

I have a cousin whom I once predicted would eventually check herself into a padded cell mainly because those things are impeccably tidy. She always had her own room as a kid. Sandeep is an interesting case because as far as I know he has no brothers and while his home sparkles (at least whenever they are having guests) his office is appalling.

College sports. The NBA and the NFL, two of the most popular professional sports leagues in the United States, outsource the scouting and training of young talent to college athletics programs. And because the vast majority of professionals are recruited out of college, the competition for professional placement continues four years longer than it would if there were no college sports.

The very best athletes play basketball and football in college, but only a tiny percentage of them will make it as professionals. If professionals were recruited out of high school then those that don’t make it would find out four years earlier than they do now. Many of them would look to other sports where they still have chances. Better athletes would go into soccer at earlier ages.

As long as college athletics programs serve as the unofficial farm teams for professional basketball and football, many top athletes won’t have enough incentive to try soccer as a career until it is already too late for them.

Embedded in a retrospective of James Q. Wilson.  It’s worth reading the whole thing; here is just one excerpt.

Call me unforgiving, but I can still remember sitting at Jim and Roberta Wilson’s dinner table in Malibu, California in January 1993 listening to Murray explain, much to my consternation and with Jim’s silent acquiescence, that social inequality is inevitable because “dull” parents are simply less effective at child-rearing than “bright” ones. (I rejected then, and still do, Murray and Herrnstein’s claim that profound social disparities are due mainly to variation in innate individual traits that cannot be remedied via social policy.) Neither can Glenn Loury in 2012 ignore what he failed to see in 1983: that Wilson and Herrnstein’s Crime and Human Nature—a book that sets out to lay bare the underlying bio-genetic, somatic, and psychological determinants of individuals’ criminal behavior—is an enterprise of dubious scientific value. The behavioral theories of social control that Wilson spawned—see, for instance, his 1983 Atlantic Monthly piece, “Raising Kids” (not unlike training pets, as it happens)—and the pop–social psychology salesmanship of his and George Kelling’s so-called “theory” about broken windows is a long way from rocket science, or even good social science. This work looks more like narrative in the service of rationalizing and justifying hierarchy, subordination, coercion, and control. In short, it smacks of highbrow, reactionary journalism.

Kidney exchanges have saved many lives since economists Al Roth, Tayfun Sonmez, Utku Unver, and Atila Abdulkadiroglu first proposed them and then convinced doctors and hospitals to embrace them.

In paired kidney exchanges the transaction involves multiple pairs of patients. Each pair consists of a kidney patient who will receive a kidney, and a donor, typically a family member, who will give one. Each pair is incompatible: because of a blood-type or tissue-type mismatch the patient would reject the donor’s kidney.  The exchange works by creating a cycle of patients and donors who are compatible. For example, patient A’s wife donates her kidney to patient B whose husband donates his kidney back to patient A. Even longer cycles are possible.

As a rule all of  the transplantation operations in any paired exchange are carried out simultaneously and in the same hospital. This acts as a guarantee to each donor that they will give their kidney if and only if their loved one also receives one. If some donor along the cycle becomes ill or gets cold feet, the entire cycle is halted before it begins. Such a guarantee surely makes patients more willing to participate in the exchange but it also limits the size of the cycle since there is a limit to the number of surgeries that any one hospital can support.

Then there are the chain exchanges. Here, without any paired patient to receive a kidney in return, a good samaritan comes forth and offers to donate his kidney to any compatible stranger.  This good samaritan is going to save somebody’s life. And through the power of exchange, possibly many more than just one life. Because instead of just an arbitrary compatible recipient, the kidney can be given to a patient paired with a donor whose kidney is compatible with another patient paired with a donor whose kidney is compatible with…  That is, the good samaritan can activate a long chain of transplants that otherwise could not be completed by paired exchange because the chain of compatibility did not cycle back to its beginning.
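A small sketch of the difference, with a made-up compatibility table: starting from the good samaritan, a greedy pass builds a chain of three transplants even though no donor’s kidney ever points back to close a cycle.

```python
# Hypothetical compatibility: donor -> set of patients that donor's
# kidney suits. Each patient p_i is paired with incompatible donor d_i.
compatible = {
    "samaritan": {"p1"},
    "d1": {"p2"},   # d1 is paired with patient p1
    "d2": {"p3"},   # d2 is paired with patient p2
    "d3": set(),    # d3 is paired with p3, suits nobody here
}
paired_donor = {"p1": "d1", "p2": "d2", "p3": "d3"}

def build_chain(start_donor):
    """Greedily extend a chain: each receiving pair's donor gives next."""
    chain, donor = [], start_donor
    while compatible.get(donor):
        patient = next(iter(compatible[donor]))  # any compatible patient
        chain.append((donor, patient))
        donor = paired_donor[patient]
    return chain

print(build_chain("samaritan"))
# [('samaritan', 'p1'), ('d1', 'p2'), ('d2', 'p3')]
```

Note that if the chain were halted after any surgery, every pair whose donor has already given has also already received a kidney, which is exactly the individual rationality property discussed below.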

The kidney exchange economists noticed a subtle difference between paired and chain exchanges. And based on their observation they convinced doctors to relax the rule on simultaneous surgeries in the case of chain exchanges. The ever-increasing record length chains of kidney transplants are only possible because of this.

Why were doctors willing to do sequential surgeries for chained exchanges while they insisted on simultaneity for paired exchanges? It’s not because they have any less concern that the chain would be broken before all patients receive their promised kidneys. It’s not because extending the size of a cycle is any less of a blessing than extending the length of a chain. The difference that the economists noticed can be boiled down to an esoteric concept known to mechanism designers as individual rationality.  

When a paired exchange cycle is broken because one surgery along the line is not carried out, one patient is necessarily made worse off than he would have been if the exchange had never happened.  Because that patient’s loved one has given her kidney and not only has the patient not received any kidney in return, but his donor no longer has a kidney to give. The patient has lost bargaining power in the kidney exchange market going forward.  The anticipation of this possibility would make patients and donors reluctant to participate in an exchange in the first instance.

By contrast, when the sequence of transplants in a chain is halted, every patient-donor pair who gave their kidney to the next patient downstream in the chain already received one from the previous upstream donor. Yes the patients at the end of the chain do not receive their promised kidneys but they are no worse off than if the chain had never been planned in the first place. Without any threat to individual rationality there is no reason not to extend the chain of surgeries as long as imaginable capitalizing on the original good samaritan’s altruism as much as compatibility allows.

Tayfun Sonmez is here at Northwestern giving a mini-course on market design, here are his lecture slides including a lecture on kidney exchange.

Over the weekend I attended a conference at the University of Chicago on The Biological Basis of Preferences and Behavior, and Balazs Szentes stole the show with a new theory of the peacock’s tail.  In Balazs’ theory a world without large and colorful peacock plumage is simply not stable.

A large tail is an evolutionary disadvantage:  it serves no useful purpose and it slows down the male and makes him conspicuous to predators.  So why do genes for large tails appear and take over the population of male peacocks? Balazs’ answer is based on matching frictions in the peacock mating market. Suppose female peacocks choose which type of male peacock to mate with: small or large tails. Once the females sort themselves across these two separate markets, the peacocks are matched and they mate.

The female peacocks are differentiated by health, and within a peacock couple health partially compensates for the disadvantageous tail. In the model this means that healthy females who mate with big-tailed peacocks will produce almost as many surviving offspring as they would if they mated with peacocks without the disadvantage of the tail.

This substitution between the characteristics of female and male peacocks creates a selection effect in the mating market. Consider what happens when a small-tailed peacock population is invaded  by a mutation which gives some male peacocks large tails. Since female peacocks make up half the population of peacocks there is now an imbalance in the market for small-tailed peacocks. In particular the males are in excess demand and some females will have trouble finding a mate.

On the other hand the big-tailed male peacocks are there for the taking and it’s going to be the healthy female peacocks who will have the greatest incentive to switch to the market for big tails. The small cost they pay in terms of reduced quantity of offspring will be offset by their increased chance of mating. The big tails have successfully invaded.

Once they have taken over the population (Balazs shows that under his conditions there is no equilibrium with two kinds of male peacocks) the same selection effect prevents small tails from invading. When a small-tail mutation appears all the females will want to mate with them. The market for small tails gets flooded with eager females up to the point where some of them are going to have a hard time finding a mate. Given this, each female must decide whether to take a gamble and try to mate with the small-tail male or have a sure chance of mating with a large tail.

The unhealthy females are going to be the ones who are most willing to take the risk because they are the least compatible with the large-tail males. This means that the small-tail mutants can only mate with unhealthy females and (under the conditions Balazs identifies) this more than offsets their advantage, they produce fewer offspring than the large-tails and they are driven out of the population.
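The invasion logic can be sketched with toy payoffs (all numbers here are assumptions): when females crowd the small-tail market, the matching friction taxes everyone equally, but only the healthy females lose little by defecting to the big-tailed mutants.

```python
# Expected surviving offspring: health partially compensates for the
# male's disadvantageous tail (assumed payoffs).
offspring = {
    ("healthy", "small"): 10, ("healthy", "big"): 9,
    ("unhealthy", "small"): 10, ("unhealthy", "big"): 5,
}

def match_prob(n_females, n_males):
    """Chance a female finds a mate when her side of the market is long."""
    return min(1.0, n_males / n_females)

# Invasion moment: 100 females, but only 85 small-tailed males remain
# after 15 big-tailed mutants appear. If every female crowds the
# small-tail market, 15 of them go unmatched.
p_small = match_prob(100, 85)   # 0.85

def payoff(health, market, p_match):
    return p_match * offspring[(health, market)]

# The big-tail market is empty of females, so matching there is sure.
healthy_stay = payoff("healthy", "small", p_small)       # 8.5
healthy_switch = payoff("healthy", "big", 1.0)           # 9.0
unhealthy_stay = payoff("unhealthy", "small", p_small)   # 8.5
unhealthy_switch = payoff("unhealthy", "big", 1.0)       # 5.0

print(healthy_stay, healthy_switch, unhealthy_stay, unhealthy_switch)
```

Under these assumed numbers the healthy females switch (9.0 beats 8.5) while the unhealthy ones stay put (5.0 loses to 8.5), so the mutants skim exactly the mates that make their offspring viable.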

My brother-in-law wanted to sell something with an auction but first he wanted to assemble as many interested buyers as he could. His problem was that while he knew there were many interested buyers in the world he didn’t know who they were or how to find them. But he had a good idea:  people who are interested in his product probably know other people who are also interested. He asked me for advice on how to use finders’ fees to incentivize the buyers he already knows about to introduce him to any new potential buyers they know.

This is a very interesting problem because it interacts two different incentive issues. First, to get someone to refer you to someone they know you have to confront a traditional bilateral monopoly problem. You are a monopoly seller of your product but your referrer is a monopoly provider of access to his friend because only he knows which and how many of his friends are interested. If your finder’s fee is going to work it’s going to have to give him his monopoly rents.

The interesting twist is that your referrer has an especially strong incentive not to give you any references. Because anybody he introduces to you is just going to wind up being competition for him in the auction for your product.  So your finder’s fee has to be even more generous in order to compensate your referrer for the inevitable reduction in the consumer’s surplus he was expecting from the auction.

I told my brother-in-law not to use finder’s fees.  That can’t be the optimal way to solve his problem.  Because there is another instrument he has at his disposal which must be the more efficient way to deal with this compound incentive problem.

Here’s the problem with finder’s fees.  Every dollar of encouragement I give to a referrer is going to cost me a full dollar.  But I have a way to give him a dollar’s worth of encouragement at a cost to me of strictly less than a dollar.  I leverage my monopoly power and use the object I am selling as the carrot.

In fact there is a basic principle here which explains not only why finder’s fees are bad incentive devices but also why employers give compensation in the form of employee discounts, why airlines use frequent flier miles as kickbacks and why a retailer would always prefer to give you store credit rather than cash refunds. It costs them less than a dollar to provide you with a dollar’s value.

Why is that?  Because any agent with market power inefficiently under-provides his product.  By setting high prices, he creates a wedge between his cost of supplying the good and your value for receiving it.  If he wants to do you a favor he could either give you cash or he could give you the cash value in product.  It’s always cheaper to do the latter.

So what does this say about incentivizing referrals to an auction?  How do you “use the object” in place of a finder’s fee?  The optimal way to do that is the following.  You tell your potential referrer that you will give him an advantage in the auction if he brings to you a new potential buyer.  Because you are a monopoly auctioneer there is always a wedge that you can capitalize on to do this at minimal cost to yourself.

In this particular example the wedge is your reserve price.  Your referrer knows that you are going to extract your profits by setting a high reserve price and thereby committing not to sell the object if he is not willing to pay at least that much.  You will induce your referrer to bring in new competition by offering to lower his reserve price when he does.

Now of course you have to deal with the problem of collusion and shills.  Of course that’s a problem in any auction and even more of a problem with monetary finder’s fees but that’s a whole nuther post.

(Ongoing collaboration with Ahmad Peivandi)

Airlines are using ever more sophisticated pricing strategies, sports teams and theaters are adopting dynamic pricing, even restaurants are using auctions to allocate scarce seating space.  And the usual perception of this is that the consumer is being gouged.  Auctions leverage competition among buyers and this drives the price up.  Sellers are raising profits by eroding consumer surplus.

But as a counterpoint to this, here is a mostly unnoticed but fundamental principle of auction-like pricing schemes:  they lead to unambiguously lower prices at the margin even when, indeed especially when, the seller is a coldhearted profit maximizer.

Suppose a theater allocates seats by selling tickets.  And suppose they do it the old-fashioned way:  they set a price for tickets and put them up for sale until they sell out.  Setting the right ticket price is a tricky problem because a price is a one-dimensional instrument that has to solve a two-sided problem.  On the one hand, you want high prices in order to capture revenue when demand turns out to be strong.  But on the other hand, you want low prices in order to ensure the theater isn’t empty when demand is weak.  A price is simply too limited an instrument to do that double duty.  It’s no wonder that there are so many empty seats on most days while on other days the show sells out way in advance.

An auction (or dynamic pricing or many other pricing systems) has built into it two separate mechanisms for handling those two separate problems.  First there is the mechanism that leverages competition.  When demand is strong buyers must compete with one another for limited space.  When that happens the price is being set not by the seller but by the buyers themselves.  A buyer wins a seat only if he is prepared to pay a price larger than the next most aggressive bidder.

The unsung virtue of the competition-leveraging aspect of auctions is that it relieves the other mechanism in an auction, the seller’s (reserve) price, of the burden of capturing revenue at the high-end of the market and allows the seller to use it for a single purpose:  to capture revenue when demand is low. And this necessarily leads the seller to reduce his reserve price below the price he would have set if he were just using prices and not auctions.

The reason follows from a simple marginal trade-off.  Think of what happens to the seller’s profit when he lowers his price a little. There are gains and losses. The gain is that the lower price leads to greater ticket sales when demand turns out to be low.  The loss is that when demand is already high enough to sell out at the original price he will sell the same number of tickets but at a lower price.  The seller’s optimal price is chosen to balance these gains and losses.

But with an auction the trade-off changes because the reserve price plays no role in determining revenues when demand is high. That’s when the buyers are setting their own prices.  Cutting reserve prices leads to all the same gains but strictly lower losses compared to cutting plain-old prices.
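The marginal trade-off can be checked numerically under stylized assumptions that are mine, not the post’s: a single seat for sale, buyer values uniform on [0, 1], and demand equally likely to be low (one interested buyer) or high (two interested buyers). A grid search then recovers the optimal posted price and the optimal auction reserve:

```python
import numpy as np

# Closed-form expected revenues for the stylized model (my assumptions):
# values are uniform on [0, 1]; demand is "low" (1 buyer) or "high"
# (2 buyers) with equal probability; one seat is for sale.

def posted_price_revenue(p):
    low = p * (1 - p)        # lone buyer purchases if his value exceeds p
    high = p * (1 - p**2)    # sale occurs if the max of two values exceeds p
    return 0.5 * low + 0.5 * high

def auction_revenue(r):
    # Second-price auction with reserve r.
    low = r * (1 - r)        # one bidder: he pays the reserve if v >= r
    # two bidders: exactly one above r pays r; both above r pay the minimum
    high = 2 * r**2 * (1 - r) + (1 - r)**2 * (r + (1 - r) / 3)
    return 0.5 * low + 0.5 * high

grid = np.linspace(0, 1, 100001)
best_p = grid[np.argmax(posted_price_revenue(grid))]  # about 0.549
best_r = grid[np.argmax(auction_revenue(grid))]       # exactly 0.5
print(best_p, best_r)
```

Consistent with the argument in the text, the reserve price comes out strictly below the posted price: the reserve only has to do its job in the low-demand state, so cutting it forgoes nothing at the high end.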

The upshot of this is that the winners and losers from an auction system aren’t who you think.  Auctions don’t favor the deep-pocketed compared to the small guys. Exactly the opposite.  The marginal consumer is priced out of the market when a seller eschews an auction because then he must keep prices high.  When a seller switches to an auction he lowers his reserve price and now the marginal consumer has a chance to buy at those low prices.

Helpful conversation with Toomas Hinnosaar acknowledged.

(Drawing:  I Persuade With Carrots)

I am catching up on my Mad Men viewing after a spring break trip abroad. I watched three episodes in one sitting last night. In Episode 3, copywriter Peggy interviews candidates for an open position. She likes the work of Michael Ginsburg, whose portfolio is labelled “judge not, lest you be judged”.  Her co-worker agrees with Peggy’s assessment of Ginsburg’s work but advises her not to hire him because, if Ginsburg turns out to be a better copywriter than Peggy, she risks losing her job to him. Later in the episode (or was it the next?), Pete humiliates Roger, taking credit for winning an account for the advertising company. Roger storms out. He says he was good to Pete when he was young, recruited him, and look how he is lording it over Roger now. A portent of Peggy’s future?

Recruiting and peer review are plagued with incentive problems in the presence of career concerns. If you recruit somebody good, you risk the chance that they replace you later on. You have an incentive to select bad candidates. You have an incentive to denigrate other people’s good work (the NIH syndrome) or even to deliberately promote their bad work in the hope that it fails dramatically and allows you to leap over them in some career race. The solution in academia is tenure. If you have a job for life, you can feel free to hire great candidates. (Various psychological phenomena such as insecurity compromise this solution of course!) Peggy does not have tenure and even Roger, who is a partner, faces the ignominy of playing second fiddle to a young upstart. Watch out Peggy!

Reality shows eliminate contestants one at a time. Shows like American Idol do this by holding a vote. The audience is asked to vote for their favorite contestant and the one with the fewest votes is eliminated.

Last week on American Idol something very surprising happened. The two singers who were considered to have given the best performances the night before, and who were strong favorites to win the whole thing, received among the fewest votes. Indeed a very strong favorite, Jessica Sanchez, was “voted off” and only survived because the judges kept her alive by using their one intervention of the season.

The problem in a nutshell is that American Idol voters are deciding whom to eliminate but instead of directly voting for the one they want to eliminate, they are asked to vote for the person they don’t want eliminated.  This creates highly problematic strategic incentives which can easily lead to a strong favorite being eliminated.

For example suppose that a large number of voters prefers contestant S to all others. But while they agree on S, they disagree about the ranking of the other contestants and they are interested in keeping their second and third favorites around too.

The supporters of S have a problem:  maintaining support for S is a public good which can be undermined by their private incentives.  In particular some of them might be worried that their second favorite contestant needs help. If so, and if they think that S has enough support from others, then they will switch their vote from S to help save that contestant. But if they fail to coordinate, and too many of the S supporters do this, then S is in danger of being eliminated.

This problem simply could not arise if American Idol instead asked audiences to vote out the contestant they want to see eliminated. Consider again the situation described above.  Yes there will still be incentives to vote strategically; indeed any voting system will give rise to some kind of manipulation.  But a strong favorite like S will be insulated from their effects. Here’s why.  An honest voter votes to eliminate the contestant she likes least.  A strategic voter might instead vote to eliminate her next-to-least favorite.  She might do this if she thinks that voting out her least favorite is a wasted vote because not enough other people will vote similarly.  And she might do this if she thinks that one of her favorite contestants is at risk of elimination.

But no matter how she tries to manipulate the vote it will be shifting votes around for her lower-ranked contestants without undermining support for her favorite. Indeed it is a dominated strategy to vote against your favorite and so a heavily favored contestant like S could never be eliminated in a voting-out system as it can with the current voting-in system.
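A toy example with made-up numbers shows how miscoordination can knock out a favorite under the current system but never under voting-out:

```python
from collections import Counter

# Illustrative electorate (numbers are my own): three contestants, 100
# voters. 40 rank S first, 35 rank A first, 25 rank B first, and S is
# nobody's least favorite.
fans = {"S": 40, "A": 35, "B": 25}

# Vote-IN (current system): vote for your favorite.  Suppose 20 of S's
# fans, confident S is safe, strategically defect to prop up B.
vote_in = Counter({"S": fans["S"] - 20, "A": fans["A"], "B": fans["B"] + 20})
eliminated_in = min(vote_in, key=vote_in.get)   # fewest votes goes home

# Vote-OUT: vote against your least favorite.  S's fans split their
# elimination votes between A and B; A-fans target B, B-fans target A.
vote_out = Counter({"A": 20 + fans["B"], "B": 20 + fans["A"], "S": 0})
eliminated_out = max(vote_out, key=vote_out.get)  # most votes goes home

print(eliminated_in, eliminated_out)  # S is knocked out only under vote-in
```

The defection is individually sensible for each S-fan, yet under vote-in it leaves S with 20 votes against A’s 35 and B’s 45. Under vote-out no fan ever casts a ballot against S, so S cannot attract the most elimination votes.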

In most of the US there is “no-fault” divorce.  Either party can petition for divorce without having to demonstrate to the court any reason to legitimize the petition. The divorce is usually granted even if the other party wants to remain married.

In England, you must prove to the judge that there is valid reason for the divorce, even if both parties want to separate. This is particularly problematic when only one party wants to separate but doesn’t have a valid reason for it. Then they must make the marriage sufficiently unpleasant for the spouse so that the spouse will a) want a divorce and b) have a verifiable good reason for it.  For example:

One petition read: “The respondent insisted that his pet tarantula, Timmy, slept in a glass case next to the matrimonial bed,” even though his wife requested “that Timmy sleep elsewhere.”


The woman who sued for divorce because her husband insisted she dress in a Klingon costume and speak to him in Klingon. The man who declared that his wife had maliciously and repeatedly served him his least favorite dish, tuna casserole.

and most egregious of all

“The respondent husband repeatedly took charge of the remote television controller, endlessly flicking through channels and failing to stop at any channel requested by the petitioner,” one petition read.

Those examples and more here.  Gat get Markus Mobius.

Instead of a mandate to buy insurance and a penalty of $X if you do not comply, what if everyone’s taxes are raised by $X and anyone who complies with the mandate receives a refund of $X? Does that make it constitutional?

Professional line standers.

The word “Intrepid” is on Hans Scheltema’s business card, and it’s more than just the name of his business. The professional line-stander prides himself on sticking it out, in all kinds of weather, on behalf of the lawyers, lobbyists and others willing to pay for a place in line at big events, such as arguments before the Supreme Court this week on the federal health-care overhaul.

But even a guy with supreme stick-to-itiveness has his limits.

On Sunday afternoon, after holding down spot No. 3 outside the Supreme Court for the better part of the day, he hired a homeless man to fill in for a few hours. Scheltema, 44, who had taken over Sunday morning for a guy who had held the spot since Friday, wanted to go home to recharge — both himself and his BlackBerry.

Here’s a pretty simple point but one that seems to be getting lost in the “discussion.”

Insurance is plagued by an incentive problem. In an ideal insurance contract the insuree receives, in the event of a loss or unanticipated expense, a payment that equals the full value of that loss. This smooths out risk and improves welfare. The problem is that by eliminating risk the contract also removes the incentive to take actions that would reduce that risk. This lowers welfare.

In order to combat this problem the contracts that are actually offered are second-best: they eliminate some risk but not all. The insured is left exposed to just enough risk so that he has a private incentive to take actions that reduce it. The incentive problem is solved but at the cost of less-than-full insurance.

But building on this idea, there are often other instruments available that can do even better. For example suppose that you can take prophylactic measures (swish!) that are verifiable to the insurance provider. Then at the margin welfare is improved by a contract which increases insurance coverage and subsidizes the prophylaxis.

That is, you give them condoms. For free. As much as they want.

As the director of recruiting for your department you sometimes have to consider Affirmative Action motives.  Indeed you are sympathetic to Affirmative Action yourself and even on your own your recruiting policy would internalize those motives.  But in fact your institution has a policy.  You perceive clear external incentives coming from that policy.

Now this creates a dilemma.  For any activity like this there is some socially optimal level and it combines your own private motivations with any additional external interests.  But the dilemma for you is how these should be combined.  One possibility is that the public motive and your own private interest stem from completely independent reasons.  Then you should just “add together” the weight of the external incentives you feel plus those of your own.  But it could be that what motivates your Dean to institutionalize affirmative action is exactly what motivates you.  In this case he has just codified the incentives you would be responding to anyway,  and rather than adding to them, his external incentives should perfectly crowd out your own.

There is no way of knowing which of these cases, or where in between, the true moral calculation is.  That is a real dilemma, but I want to think of it as a metaphor for the dilemma you face in trying to sort out the competing voices in your own private moral decisions.

Say you have a close friend and you have an opportunity to do something nice for them, say buy them a birthday gift.  You think about how nice your friend has been to you and decide that you should be especially nice back.  But compared to what? Absent that deliberative calculation you would have chosen the default level of generosity.  So what your deliberation has led you to decide is that you should be more generous than the default.

But how do you know?  What exactly determined the default?  One possibility is that the default represents your cumulative wisdom about how nice you should be to other people in general.  Then your reflection on this particular friend’s particular generosity should increment the default by a lot.  But surely that’s not the relevant default.  He’s your friend, he’s not just an arbitrary person (you wouldn’t even be considering giving a gift to an arbitrary person).  No doubt your instinctive inclination to be generous to your friend already encodes a lot of the collected memory and past reflection that also went into your most recent conscious deliberation.  And as long as there is any duplication, there should be crowding out. So you optimally moderate the enthusiasm that arises from your conscious calculation.

But how much?  That is a dilemma.

Wealthy kids are usually wealthy because their wealthy parents left them a lot of money.  You might think that’s because parents are altruistic towards their kids.  Indeed every dollar bequeathed is a dollar less of consumption for the parent.  But think about this:  if parents are so generous towards their kids why do they wait until they die to give them all that money?  For a truly altruistic parent, the sooner the gift, the better.  By definition, a parent never lives to see the warm glow of an inheritance.

A better theory of bequests is that they incentivize the children to call, visit, and take care of the parents in their old age.  An inheritance is a carrot that awaits a child who is good to the parent until the very end.  That’s the theory of strategic bequests in Bernheim, Shleifer and Summers.

But even with that motivation you have to ask why bequests are the best way to motivate kids.  Why not just pay them a piece rate?  Every time they come to visit they get a check.  If the parent is even slightly altruistic this is a better system since the rewards come sooner.

To round out the theory of strategic bequests we need to bring in the compound value of lump-sum incentives.  Suppose you are nearing the bitter end and it’s likely you are not going to live more than another year.  You want your kids to visit you once a month in your last year and that’s going to cost you 12*c where c is your kid’s opportunity cost per visit.  You could either implement this by piece rate, paying them c every time they come, or in a lump sum by leaving them 12c in your will if they keep it up the whole time.

But now what happens if, as luck would have it, you actually survive for another year?  With the piece rate you are out 12c and still have to cough up another 12c if you want to see your kids again before you die.  But a bequest can be re-used.  You just restart the incentives, and you get another year’s worth of visits at zero additional cost.
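The re-use advantage can be put in back-of-the-envelope numbers. This sketch uses an illustrative survival probability and a geometric-lifespan assumption of my own, and it deliberately sets aside the discounting/altruism benefit of paying early that the post mentions:

```python
# c = cost per visit, 12 visits a year, and each year you survive into
# the next with probability q (illustrative numbers, not from the post).
c, q = 1.0, 0.5

# Piece rate: you pay 12c for every year you turn out to live.
expected_years = 1 / (1 - q)          # mean of a geometric lifespan
piece_rate_cost = 12 * c * expected_years

# Bequest: the same 12c is promised once and simply re-armed each year
# you survive, so the expected outlay never grows.
bequest_cost = 12 * c

print(piece_rate_cost, bequest_cost)  # 24.0 vs 12.0 with these numbers
```

With q = 0.5 you expect to live two more years, so the piece rate costs twice what the bequest does; the gap widens the longer you are likely to linger.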

Is it credible?  All you need is to commit to a policy that depends only on their devotion in the last year of your life.  Since you are old your kids know you can’t remember what happened earlier than that anyway so yes, it’s perfectly credible.

(Idea suggested by Mike Whinston.)

If you buy something online using PayPal and upon delivery it turns out to be not what was advertised, PayPal might refund your payment and require you to destroy the item:

I sold an old French violin to a buyer in Canada, and the buyer disputed the label.

This is not uncommon. In the violin market, labels often mean little and there is often disagreement over them. Some of the most expensive violins in the world have disputed labels, but they are works of art nonetheless.

Rather than have the violin returned to me, PayPal made the buyer DESTROY the violin in order to get his money back. They somehow deemed the violin as “counterfeit” even though there is no such thing in the violin world.

PayPal required the buyer to photograph the remains to prove it was destroyed.  (How else can you prove to PayPal that the item wasn’t worth the money?) Glengarry glide:  Eitan Hochster.


You and your partner have to decide on a new venture. Maybe you and your sweetie are deciding on a movie, you and your co-author are deciding on which new idea to develop, or you and your colleague are deciding which new Assistant Professor to hire.

Deliberation consists of proposals and reactions. When you pitch your idea you naturally become attached to it. It’s your idea, your creation. Your feelings are going to be hurt if your partner doesn’t like it.

Maybe you really are a dispassionate common interest maximizer, but there’s no way for your partner to know that for sure. You try to say “give me your honest opinion, I promise I have thick skin, you won’t hurt my feelings.” But you would say that even if it’s a little white lie.

The important thing is that no matter how sensitive you actually are, your partner believes that there is a chance your feelings will be hurt if she shoots down your idea. And she might even worry that you would respond by feeling resentful towards her. All of this makes her reluctant to give her honest opinion about your idea. The net result is that some inferior projects might get adopted because concern for hurt feelings gets in the way of honest information exchange.

Unless you design the mechanism to work around that friction. The basic problem is that when you pitch your idea it becomes common knowledge that you are attached to it. From that moment forward it is common knowledge that any opinion expressed about the idea has the chance of causing hurt feelings.

So a better mechanism would change the timing to remove that feature. You and your partner first announce to one another which options are unacceptable to you. Now all of the rejections have been made before knowing which ones you are attached to. Only then do you choose your proposal from the acceptable set.

If your favorite idea has been rejected then for sure you are disappointed. But your feelings are not hurt because it is common knowledge that her rejection is completely independent of your attachment. And for exactly that reason she is perfectly comfortable being honest about which options are unacceptable.
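Under the assumption that the option set is known in advance, the mechanism can be sketched in a few lines (the option names and the tie-breaking proposal rule here are invented for illustration):

```python
# A known universe of options (made-up names for illustration).
options = {"comedy", "drama", "horror", "documentary"}

def choose(options, vetoes_a, vetoes_b, propose):
    # Stage 1: both sides reject options BEFORE anyone is attached to one,
    # so no veto can be read as an opinion about a partner's pet idea.
    acceptable = options - vetoes_a - vetoes_b
    # Stage 2: only now is a proposal drawn from the mutually acceptable set.
    return propose(acceptable)

pick = choose(options,
              vetoes_a={"horror"},
              vetoes_b={"documentary"},
              propose=lambda s: sorted(s)[0])  # arbitrary tie-break rule
print(pick)  # a proposal that, by construction, cannot be shot down
```

The point of the ordering is visible in the code: the `acceptable` set is fixed before `propose` runs, so whatever gets pitched in stage 2 has already survived every veto.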

This is going to work better for movies and new Assistant Professors than it is for research ideas, because we know in advance the universe of all movies and job market candidates.

Research ideas and other creative ventures are different because there is no way to enumerate all of the possibilities beforehand and reject the unacceptable ones. Indeed the real value of a collaborative relationship is that the partners are bringing to the table brand new previously unconceived-of ideas. This makes for a far more delicate relationship.

We can thus classify relationships according to whether they are movie-like or idea-like, and we would expect that the first category are easier to sustain with second-best mechanisms whereas the second require real trust and honesty.

(inspired by a conversation with +Emil Temnyalov and Jorge Lemus)

(Regular readers of this blog will know I consider that a good thing.)

Market mechanisms of all sorts are plagued in practice by the problem of unraveling.  For example, well before completing law school, law students sign contracts to assume positions at established law firms.  Unraveling occurs when this early contracting motive causes market participants to compete by exiting the market earlier and earlier to the detriment of market efficiency.  An excellent summary of the problem and a slew of examples can be found in a paper by Roth and Xing.

One of the problems is that the formal market institutions were not designed to combat unraveling.  The adoption of stable matching mechanisms is often proposed as a solution.  A famous example is the National Medical Resident Matching program which matches residents to hospitals, a stable matching mechanism that is widely believed to have significantly curtailed unraveling in that market.

Nevertheless unraveling is a robust phenomenon and Songzi Du of Stanford GSB, in a joint paper with Yair Livne, has a very simple theoretical explanation.  Indeed, they show that unraveling incentives are strong even in markets with a stable matching mechanism.  Moreover, large market size seems only to make the problem worse.

Consider an employer and employee who are both highly ranked and suppose that they meet each other well before the matching process begins so that neither has learned anything about the quality of the rest of the market.  Let’s analyze their incentives to sign a contract now and exit the market before the formal matching process takes place.

The employee reasons that the mechanism is either going to give him a better match or it’s going to give the employer a better match. If he, the employee, gets a better match it is not likely to be that much better since the current employer is already highly ranked.  On the other hand, if the employer finds a better match then the employee is going to have to take his chances with the rest of the market.  Since the current employer is highly ranked, it is likely that whatever new employer he is matched with will be significantly worse.

On average going to the matching mechanism is a bad gamble.  And since the employer is in the exact same situation, they both prefer to exit than to take that gamble.

Du and Livne use this idea to quantify how large a problem unraveling is likely to be.  They take the realistic position that participants are going to learn about the quality of close competitors prior to contracting. This gives them a rough sense of the possible matches they will get from the mechanism.  The previous intuition translates naturally to this setting.  If the two potential early contractors are near the high end of this group, they will want to match early.  Du and Livne show that for any given similarly ranked pair, this will happen about 1/4 of the time, and this is true even when the market is very large.

Finally, once it is established that unraveling is the norm and not the exception, they use a dynamic model to give a sense of what kind of equilibrium an unraveled market settles into.  And the news here is not good either.  No matter how you try to make the match work, by assigning some to match early and some to wait, there will always be some pairs that want to deviate from that plan.  That is, there is no equilibrium.

I was talking to someone about matching mechanisms and the fact that strategy-proof incentives are often incompatible with efficiency.  The question came up as to why we insist upon strategy-proofness, i.e. dominant strategy incentives as a constraint.  If there is a trade-off between incentives and efficiency shouldn’t that tradeoff be in the objective function?  We could then talk about how much we are willing to compromise on incentives in order to get some marginal improvement in efficiency.

For example, we might think that agents are willing to tell the truth about their preferences as long as manipulating the mechanism doesn’t improve their utility by a large amount.  Then we should formalize a tradeoff between the epsilon slack in incentives and the welfare of the mechanism.  The usual method of maximizing welfare subject to an incentive constraint is flawed because it prevents us from thinking about the problem in this way.

That sounded sensible until I thought about it just a little bit longer.  If you are a social planner you have some welfare function, let’s say V.  You want to choose a mechanism so that the resulting outcome maximizes V.  And you have a theory about how agents will play any mechanism you choose.  Let’s say that for any mechanism M, O(M) describes the outcome or possible outcomes according to your theory.  This can be very general:  O(M) could be the set of outcomes that will occur when agents are epsilon-truth-tellers, it could be some probability distribution over outcomes reflecting that you acknowledge that your theory is not very precise.  And if you have the idea that incentives are flexible, O can capture that:  for mechanisms M that have very strong incentive properties, O(M) will be a small set, or a degenerate probability distribution, whereas for mechanisms M that compromise a bit on incentives O(M) will be a larger set or a more diffuse probability distribution.  And if you believe in a tradeoff between welfare and incentives, your V applied to O(M) can encode that by quantifying the loss associated with larger sets O(M) compared to smaller sets O(M).

But whatever your theory is you can represent it by some O(.) function.  Then the simplest formulation of your problem is:  choose M to maximize V(O(M)). And then we can equivalently express that problem in our standard way: choose an outcome (or set of outcomes, or probability distribution over outcomes) O to maximize V(O) subject to the constraint that there exists some mechanism M for which O = O(M).  That constraint is called the incentive constraint.
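In symbols, the equivalence described above is:

```latex
\max_{M} \; V\bigl(O(M)\bigr)
\quad\Longleftrightarrow\quad
\max_{O} \; V(O)
\quad \text{subject to} \quad
O = O(M) \ \text{for some mechanism } M,
```

where the constraint on the right is exactly the incentive constraint: it carves out the set of outcomes that some mechanism, played according to your theory O(.), can actually deliver.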

Incentives appear as a constraint, not in the objective.  Once you have decided on your theory O, it makes no sense to talk about compromising on incentives and there is no meaningful tradeoff between incentives and welfare.  While we might, as a purely theoretical exercise, comment on the necessity of such a tradeoff, no social planner would ever care to plot a “frontier” of mechanisms whose slope quantifies a rate of substitution between incentives and welfare.

“Corporations are evil” and we know this because they are always doing malicious things that are only later exposed. This often involves exploiting the complexity of transactions and the inability or unwillingness of consumers to wade through the thicket by surreptitiously ripping people off.  For example, unauthorized charges inserted into phone bills, in a practice known as “cramming”, cost Americans $2 billion a year, according to this article.

When something like this is discovered, the automatic reaction is to assume that the malice was intentional. They were sticking those charges in there to squeeze money out of consumers. And it’s basic economics that if they can secretly insert charges and make money they will. On the other hand, such a theory would appear to require you to accept the hypothesis that “corporations are evil” or at least that they are cold-hearted profit maximizers.

But you can believe that corporations are not intentionally malicious and still assume that whenever there is a cold-hearted way to steal money they will do it, because many malicious practices are not actively designed; rather they creep in and are passively allowed to persist.

For example, those charges could have been legitimate under an outdated policy and when the policy was changed they forgot to remove them. Or some bumbling technician could have accidentally inserted them. Modern transactions are so complicated that random “mutations” are going to appear without any malicious intent and indeed without anyone noticing. This is a far more likely explanation than someone purposefully sticking them in there, especially if you doubt that “corporations are evil.”

Indeed, to have a conscious policy of ripping off unsuspecting customers requires instructing somebody to do that, and leaving a paper trail. Even a truly evil corporation understands that this is the wrong way to do it. The right way to do it is to structure the organization in a way that facilitates malice creep.

You don’t have to instruct anybody to allow mutant ripoffs to appear. They appear on their own, no paper trail required. All you need to do is to give weak incentives to the officers you have charged with making sure that you are not ripping anybody off.  Nobody in your organization will have any knowledge of all the ways you are cheating your clients, not even you. By design.

There is an art to the design of an organization that cultivates malice creep, because at the same time you have to stop “virtue creep” in its tracks. You don’t want unintended credits to randomly get inserted into the phone bill. What you need is a one-sided monitoring program. You wait around for lots of mutations to appear, knowing that some are virtuous and some are malicious. Now getting rid of the virtuous ones and keeping the malicious ones is easily done: just announce that it’s time to do some “cost-cutting.” Form an ad hoc task force to go through and find ways to restructure billing in ways that save the company money. They’ll just look at the credits and ignore the charges.

In terms of the long-run bottom line, Darwinism and Lamarckism are almost indistinguishable, but Occam’s razor favors Darwin.  I would argue by the same principles that most of the malicious practices of organizations emerge by cultivated accident rather than by design.


This is from an article in the New York Times.

When the taxi baron Robert Scull sold part of his art collection in a 1973 auction that helped inaugurate today’s money-soused contemporary-art market, several artists watched the proceedings from a standing-room-only section in the back. There, Robert Rauschenberg saw his 1958 painting “Thaw,” originally sold to Scull for $900, bring down the gavel at $85,000. At the end of the Sotheby Parke Bernet sale in New York, Rauschenberg shoved Scull and yelled that he didn’t work so hard “just for you to make that profit.”

The uproar that followed in part inspired the California Resale Royalties Act, requiring anyone reselling a piece of fine art who lives in the state, or who sells the art there for $1,000 or more, to pay the artist 5 percent of the resale price.

Spake Kim:

“I got caught up with the hoopla and the filming of the TV show that when I probably should have ended my relationship, I didn’t know how to and didn’t want to disappoint a lot of people,” the post said.

That explains it.

The roses in your garden are dead and your gardener tells you that there are bugs that have to be killed if you want the next generation of roses to survive.  So you pay him to plant new roses and spray poison to keep the bugs away.

Each week he comes back and tells you that the bugs are still threatening to kill the roses and you will need to pay him again to spray the poison to keep them away.  This goes on and on.  At what point do you stop paying him to spray poison on your roses?

Keep in mind that if there really are bugs waiting to take over once the poison is gone, you are going to lose your roses if you stop spraying.  So you are taking a big risk if you stop.  On the other hand, only he really knows for sure whether the bugs are threatening; you are just taking his word for it.

Now add to that the possibility that the poison is not guaranteed.  You may have an infestation even in a week where he sprays.  Of course this only happens if the bugs are a threat.  If you spray for many weeks and you see no infestation this is a pretty good sign that the bugs are not a threat at all.

If you do stop spraying at some point, on what basis do you make that decision?  Assuming he is spraying vigilantly you would optimally stop after many weeks of no infestation.  You would continue for sure if one week the bugs return even though he was spraying.
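The downward drift in your belief that the bugs are a threat, assuming he really is spraying, is just Bayesian updating. Here is a minimal sketch with made-up numbers: `prior` is your initial belief that the bugs are a threat, and `q` is the chance that an infestation breaks through the poison in any given sprayed week (both values are hypothetical, not from the post).

```python
def posterior_threat(prior, q, clean_weeks):
    """P(bugs are a threat | no infestation in `clean_weeks` sprayed weeks).

    If the bugs are a threat, each sprayed week stays clean with probability
    (1 - q); if they are no threat, every week is clean for sure.
    """
    likelihood_threat = (1 - q) ** clean_weeks  # threat, but the poison held
    likelihood_safe = 1.0                       # no threat: always clean
    num = prior * likelihood_threat
    return num / (num + (1 - prior) * likelihood_safe)
```

With `prior = 0.5` and `q = 0.3`, five clean weeks drive the posterior down to about 8 percent, which is why, if you trust the spraying, many uneventful weeks point toward stopping.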

But you don’t know for sure that he is actually spraying.  You are paying him to do it, but you are taking his word for it.  If you assume that he is doing his job and spraying vigilantly, and you therefore follow the decision rule above, then if he wants to keep his job he won’t be spraying vigilantly after all.

So what do you do?

Al Roth starts a list of comparative advantages of the new electronic parking meters relative to old school coin-fed.

In Brookline, where I live, one can already begin to catalog some of the relative advantages and disadvantages of the old and new technologies, aside from those mentioned above, regarding credit cards in particular.

Waiting time and queues: old meters took your quarters immediately (if they were working well enough to take them at all); new meters take some time even if you are first in line, and since they serve multiple spots, you may have to wait while they take that time for the people ahead of you.

Parking at 7:45am: old meters made you start paying even if you rolled up to the curb before payment was required; new meters know that you don’t have to pay until e.g. 8am, and so can sell you parking until 8:30 without charging you for the first 15 minutes until 8.

Adding time to the meter: old meters let you add another quarter to add time, e.g. if you glanced in at the coffee shop after you had already put money in the meter and noticed that there were no vacant tables, so you would have to go across the street, and wouldn’t be back by 8:30.  New meters print a receipt for you to put on your dashboard, and don’t let you add time to the end of the time interval you have already bought.

To which I would add:  No Free Riding.  There is no more hope of rolling into a space with time still left on the meter from the last guy.

And a spinoff list of disadvantages of both systems.  Pre-payment.  The meter forces me to bear the risk associated with my own uncertain parking duration.  I pay in advance and hope I don’t pay too much or too little.  If I pay ex post I am insured against that risk and I am willing to pay a higher per-minute rate.  (What is the effect on my incentive to park for longer?  With ex-post payment I bear a constant cost per minute I stay.  With pre-payment that incremental cost is zero up until the meter expires and from there increases with the probability that the meter maid turns up.)
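The incentive point in the parenthetical can be made concrete with a toy marginal-cost comparison. All the numbers below are invented for illustration: a per-minute rate, a fine, and a constant per-minute chance of being ticketed once the meter has expired.

```python
RATE = 0.05       # ex-post payment: dollars per minute, for exactly the time used
FINE = 25.0       # pre-payment: fine if ticketed on an expired meter
P_TICKET = 0.001  # chance per minute that the meter maid turns up after expiry

def marginal_cost_ex_post(minute):
    """Cost of staying one more minute when you pay after the fact."""
    return RATE  # constant: every extra minute costs the same

def marginal_cost_prepaid(minute, minutes_bought):
    """Cost of staying one more minute when you paid in advance."""
    if minute <= minutes_bought:
        return 0.0               # already paid for; staying is free at the margin
    return P_TICKET * FINE       # expected fine for an unpaid minute
```

Under pre-payment the marginal cost of lingering is zero until the meter runs out and then jumps to the expected fine, whereas ex-post payment prices every minute the same, which is the distortion of the incentive to park longer described above.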

If you think about pain as an incentive mechanism to stop you from hurting yourself there are some properties that would follow from that.

When I was pierced by a stingray, the pain was outrageous. The puncture went deep into my foot and that of course hurts but the real pain came from the venom-laden sheath that is left behind when the barb is removed. Funny thing about the venom is that it is protein based and it can be neutralized by denaturing the protein, essentially changing its structure by “cooking” it as you would a raw egg.

How do you cook the venom when it is inside your foot? You don’t pee on it unless you are making a joke on a sitcom (and that’s for a jellyfish sting anyway). What you do is plunge your foot in scalding hot water, raising the internal temperature enough to denature the venom inside. Here’s what happens when you do that. Immediately you feel dramatic relief from the pain. But not long after that you begin to notice that your foot is submerged in scalding hot water, and that is bloody painful.

So you take it out. Then you feel the nerve-numbing pain from the venom return to the fore. Back in. Relief, burning hot water, back out. And so on, over and over, until you have cooked all the venom and you are done. In all, about four hours of soaking.

A good incentive scheme is reference-dependent. There’s no absolute zero. Zero is whatever baseline you are currently at and rewards/penalties incentivize improvement relative to the baseline. When the venom was the most dangerous thing, the scalding hot water was painless. Once the danger from the venom was reduced, the hot water became the focus of pain. And back and forth.

Second Observation.  After three weeks of surfing (minus a couple of days robbed by my stingray friend) I came away with a sore shoulder.  Rotator cuff injuries are common among surfers, especially over-the-hill surfers who don’t exercise enough the other 11 months of the year.  The interesting thing about a rotator cuff injury is that the pain is felt in the upper shoulder, not at the site of the injury, which is closer to the shoulder blade.  It’s referred pain.

In a moral hazard framework the principal decides which signals to use to trigger rewards and penalties.  Direct signals of success or failure are not necessarily the optimal ones to use because success and failure can happen by accident too.  The optimal signal is the one that is most informative that the agent took the appropriate effort.  Referred pain must be based on a similar principle.  Rotator cuff injuries occur because of poor alignment in the shoulder, resulting in an inefficient mix of muscles doing the work.  Even though it’s the rotator cuff that is injured, use of the upper shoulder is a strong signal that you are going to worsen the injury.  It may be optimal to penalize that directly rather than associate the pain with the underlying injury.
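The "most informative signal" idea can be sketched with likelihood ratios. The probabilities below are purely illustrative: for each candidate signal, we compare how likely it is under bad form versus good form, and the signal with the higher ratio is the better one to tie the penalty (pain) to.

```python
# Toy, made-up numbers: P(signal observed | bad form), P(signal observed | good form)
signals = {
    "pain at injury site":   (0.6, 0.4),  # the injury aches somewhat regardless of form
    "upper-shoulder strain": (0.9, 0.1),  # almost only fires when form is bad
}

def likelihood_ratio(p_bad, p_good):
    """How much more likely the signal is under bad form than good form."""
    return p_bad / p_good

# The penalty should ride on the signal that best discriminates bad form.
best_signal = max(signals, key=lambda s: likelihood_ratio(*signals[s]))
```

Here `best_signal` is the upper-shoulder strain (ratio 9 versus 1.5), matching the claim that penalizing upper-shoulder use directly is more informative about form than penalizing pain at the injury site.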

(Drawing:  Scale Up Machine Fail)
