You are currently browsing the tag archive for the ‘incentives’ tag.

Registration for the 2012 Allied Social Sciences Meetings has just opened up today. The ASSA meeting is the annual “winter meeting” in which hordes of economists descend on a rotating list of cities to spend a weekend shuffling papers around and stiffing cab drivers.

It was by sheer luck that last night I noticed that today would be the first day to register. And so this morning I was one of the first to log in to the ASSA hotel registration system and reserve one of the better suites (we will be in the Fairmont) in one of the more central conference hotels where Northwestern Economics will conduct its job market interviews.  (New PhD recruitment is one of the main activities, perhaps the main activity, at the ASSA meetings.)  Had I been just a few hours later we would have been relegated to a remote hotel, making it harder for interviewers and interviewees to get to and from the interviews.

(If that happened to you, you can follow @ASSAMeeting on Twitter to wait for announcements of new suites opening up.  But wouldn’t you rather follow me? Sandeep?)

It’s funny that a conference run by economists uses a queueing/rationing system to allocate scarce hotel space.  The system doesn’t even allow ex post exchanges between departments, which would undo inefficient misallocations.  If MIT gets stuck in the Embassy Suites and Podunk U is in the Hyatt Regency, then tough luck MIT (maybe Podunk can build a stronger theory group, ha ha ha.)

The problem is that ASSA negotiates discounted rates for the suites by reserving them in bulk.  Obviously that is good for everyone.  But the discounted rate is below market clearing and therefore there  will be excess demand for the best hotels.  It would seem that the resulting inefficiency is the price we have to pay for our monopsony power.

Indeed it would not work to have ASSA negotiate hotel space at discount rates and then turn around and use an efficient auction internally to allocate it.  The reason is somewhat subtle.  Here’s one way of seeing it.  An efficient auction for a single suite is (essentially) a second-price auction.  It works efficiently because, knowing I will have to pay the second-highest bid, I will bid exactly my willingness to pay.  Therefore the winner will be the bidder who values the suite the most.  However, because ASSA bought the suite at a rate that is below market clearing, the second-highest bid for a suite is going to be more than ASSA paid for it.

That means ASSA makes money.  Sounds good right?  No.  In fact it is a problem precisely because we all benefit from ASSA making money (we get lower registration fees, lower journal subscription fees, etc.) You see I internalize the benefits of ASSA’s revenue which essentially means that I get back some of the price I pay when I win the auction. In other words I am not really paying the second highest bid, I am paying something less.  And because of that I no longer want to bid my true willingness to pay, and the mechanism breaks down.  In the jargon, the efficient auction is no longer incentive compatible.
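To see how the rebate breaks incentive compatibility, here is a minimal sketch.  All the numbers are hypothetical: a bidder with value 10, an opponent whose bid is uniform on [0, 20], and a fraction `alpha` of the auction revenue flowing back to the winner (a stand-in for internalizing ASSA’s earnings).  A brute-force search shows the best response inflating from the true value toward value/(1 − alpha).

```python
import random

def best_response(value, alpha, n_draws=10_000, seed=0):
    """Grid-search the profit-maximizing bid in a second-price auction
    where the winner effectively gets back a fraction alpha of the revenue."""
    rng = random.Random(seed)
    opponent = [rng.uniform(0, 20) for _ in range(n_draws)]
    best_bid, best_profit = 0.0, float("-inf")
    for bid in [b / 10 for b in range(0, 201)]:  # candidate bids 0.0 .. 20.0
        # You win when your bid is higher; you pay the opponent's bid b2
        # but internalize alpha of it, so the effective price is (1-alpha)*b2.
        profit = sum(value - (1 - alpha) * b2 for b2 in opponent if bid > b2) / n_draws
        if profit > best_profit:
            best_bid, best_profit = bid, profit
    return best_bid

print(best_response(10, 0.0))  # no rebate: truthful bidding, about 10
print(best_response(10, 0.5))  # 50% rebate: best response inflates toward 10/(1-0.5) = 20
```

With no rebate, truth-telling is (up to the grid) the best response; any positive rebate makes overbidding profitable, which is exactly the failure of incentive compatibility described above.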

But there is a solution.  The basic mistake ASSA is making is to negotiate discounted suites.  Why does it do that?  Well, it has monopsony power and it has different hotels in the area compete with one another for the business. Since we are buying hotel space it seems natural to make hotel discounts the currency of that competition.  But we saw the problem with that.  Instead ASSA should ask for lump sum cash bids from the hotels.  The highest bidding hotel gets the right to auction off their suites to ASSA members using an efficient auction. The hotel keeps all revenues from the auction.

That way I don’t internalize any of the revenues from the auction.  The mechanism is incentive compatible again and therefore gives an efficient allocation.  The hotels make some money.  And the amount of money they can expect to earn is exactly how much they are willing to pay to ASSA in advance for that right.  So in fact ASSA comes away with their monopsony rents without having to sacrifice efficiency.

The efficient way to allocate scarce capacity on a flight is to hold an auction as close to departure time as possible. Allocating space prior to that point runs the risk that a ticketed passenger learns that his willingness to pay is lower than he expected (business meeting is cancelled, family member falls ill, etc.)  Allocating space close to the time of departure ensures that passengers have resolved any uncertainty about their willingness to pay and those with the highest willingness to pay will be seated.

But with an auction the airline cedes a lot of consumer surplus to the passengers, because in an efficient auction the winner pays not his own willingness to pay, but the willingness to pay of the marginal bidder. The airline is willing to sacrifice efficient allocation in exchange for a mechanism that extracts more of the gains from trade.

The ideal for the airline would be to sell tickets to the auction and to put these tickets up for sale as early as possible, before passengers have any private information about their willingness to pay.

Here’s an extreme example to illustrate. There is a plane with one seat and two potential passengers. By the time of departure they will know their willingness to pay, but when they first enter this world they know only the probability distribution. The airline should announce that there will be a 2nd price auction at the time of departure but in order to be allowed to participate in that auction the passengers must purchase a ticket the moment they are born. The price of this ticket will be set equal to the expected value of consumer surplus from the auction. This way the airline achieves the maximal gains from trade and secures all the rents for itself.
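A back-of-the-envelope version of this scheme, under my own added assumption that the two passengers’ values are i.i.d. uniform on (0, 1): the ticket price the airline would charge at birth is each passenger’s ex-ante expected surplus from the second-price auction at departure.

```python
import random

def ticket_price(n_sim=200_000, seed=1):
    """Ex-ante expected consumer surplus per passenger when the single seat
    is sold by a second-price auction between two passengers with i.i.d.
    uniform(0,1) values.  The airline charges exactly this up front."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        v1, v2 = rng.random(), rng.random()
        # The higher-value passenger wins and pays the loser's value.
        total += max(v1, v2) - min(v1, v2)
    # By symmetry each passenger expects half of the realized surplus.
    return total / n_sim / 2

print(ticket_price())  # analytically E[max - min]/2 = (1/3)/2 = 1/6 ≈ 0.167
```

The airline thereby collects the full expected gains from trade (the 1/6 from each passenger up front, plus the second-price revenue at departure) while the seat still goes to whoever turns out to value it most.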

Obviously the problem is that the airline cannot contract with every potential passenger still in the bassinet. Indeed contracting is initiated by the passenger, not the airline. Thus, in order to be able to extract consumer surplus the airline’s mechanism has to give the passengers the incentive to voluntarily contract early prior to the resolution of uncertainty.

A mechanism that accomplishes this will have two features. First, ticket prices must rise as the departure date approaches. This incentivizes early purchases. Second, flights will be oversold. This enables efficient re-allocation of seats on the basis of information realized after tickets are purchased. In particular, those with lowest realized willingness to pay will sell back their tickets in a reverse auction.

(This is ongoing research with Daniel Garrett and Toomas Hinnosaar, two NU students who will be on the job market this year.)

Let’s say I want to know how many students in my class are cheating on exams.  Maybe I’d like to know who the individual cheaters are, maybe not, but let’s say that the only way I can find out the number of cheaters is to ask the students themselves to report whether or not they cheated.  I have a problem because no matter how hard I try to convince them otherwise, they will assume that a confession will get them in trouble.

Since I cannot persuade them of my incentives, instead I need to convince them that it would be impossible for me to use their confession as evidence against them even if I wanted to.  But these two requirements are contradictory:

1. The students tell the truth.
2. A confession is not proof of their guilt.

So I have to abandon one of them.  That’s when you notice that I don’t really need every student to tell the truth.  Since I just want the aggregate cheating rate, I can live with false responses as long as I can use the response data to infer the underlying cheating rate.  If the students randomize whether they tell me the truth or lie, then a confession is not proof that they cheated.  And if I know the probabilities with which they tell the truth or lie, then with a large sample I can infer the aggregate cheating rate.

That’s a trick I learned about from this article.  (Glengarry glide: John Chilton.)  The article describes a survey designed to find out how many South African farmers illegally poached leopards.  The farmers were given a six-sided die and told to privately roll the die before responding to the question.  They were instructed that if the die came up a 1 they should say yes that they killed leopards.  If it came up a 6 they should say that they did not.  And if a 2-5 appears they should tell the truth.

A farmer who rolls a 2-5 can safely tell the researcher that he killed leopards because his confession is indistinguishable from a case in which he rolled a 1 and was just following instructions.  It is statistical evidence against him at worst, probably not admissible in court.  And assuming the farmers followed instructions, those who killed leopards will say so with probability 5/6 and those who did not will say so with probability 1/6.  In a large sample, the fraction of confessions will be a weighted average of those two numbers with the weights telling you the desired aggregate statistic.
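The estimator is easy to check by simulation.  This sketch follows the die rule from the article; the 30% poaching rate is a made-up number used only to verify that the inversion recovers it:

```python
import random

def run_survey(true_rate, n=100_000, seed=2):
    """Simulate the forced-response survey: answer 'yes' on a roll of 1,
    'no' on a 6, and truthfully on 2-5."""
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        poacher = rng.random() < true_rate
        roll = rng.randint(1, 6)
        if roll == 1:
            yes += 1            # forced confession
        elif roll == 6:
            pass                # forced denial
        elif poacher:
            yes += 1            # truthful confession
    return yes / n

def estimate_rate(yes_fraction):
    # Poachers say yes with prob 5/6, non-poachers with prob 1/6, so
    # yes_fraction = (5/6)q + (1/6)(1-q)  =>  q = (yes_fraction - 1/6)/(4/6)
    return (yes_fraction - 1 / 6) / (4 / 6)

print(estimate_rate(run_survey(0.30)))  # recovers roughly 0.30
```

No individual answer is incriminating, yet with a large sample the aggregate rate comes out of the arithmetic.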

A new paper by Edwards and Ogilvie challenges the stylized facts that motivate MNW.  They claim:

The policies of the counts of Champagne played a major role in the rise of the fairs. The counts had an interest in ensuring the success of the fairs, which brought in very significant revenues. These revenues in turn enabled the counts to consolidate their political position by rewarding allies and attracting powerful vassals….

The first institutional service provided by the counts of Champagne consisted of mechanisms for ensuring security of the persons and property rights of traders. The counts undertook early, focused and comprehensive action to ensure the safety of merchants travelling to and from the fairs…

A second institutional service provided by the rulers of Champagne was contract enforcement. The counts of Champagne operated a four-tiered system of public lawcourts which judged lawsuits and officially witnessed contracts with a view to subsequent enforcement…

A final reason for the success of the Champagne fair-cycle was that it offered an almost continuous market for merchandise and financial services throughout the year, like a great trading city, but without the most severe disadvantage of medieval cities – special privileges for locals that discriminated against foreign merchants…

The paper is an interesting read and there are lots of rich details about the Champagne fairs themselves.

Usain Bolt was disqualified in the final of the 100 meters at the World Championships due to a false start.  Under current rules, in place since January 2010, a single false start results in disqualification.  By contrast, prior to 2003 each racer who jumped the gun would be given a warning and then disqualified after a second false start.  In 2003 the rules were changed so that the entire field would receive a warning after a false start by any racer and all subsequent false starts would lead to disqualification.

Let’s start with the premise that an indispensable requirement of sprint competition is that all racers must start simultaneously.  That is, a sprint is not a time trial but a head-to-head competition in which each competitor can assess his standing at any instant by comparing his and his competitors’ distance to a fixed finish line.

Then there must be a penalty for a false start.   The question is how to design that penalty.  Our presumed edict rules out marginally penalizing the pre-empter by adding to his time, so there’s not much else to consider other than disqualification.  An implicit presumption in the pre-2010 rules was that accidental false starts are inevitable and there is a trade-off between the incentive effects of disqualification and the social loss of disqualifying a racer who made an error despite competing in good faith.

(Indeed this trade-off is especially acute in high-level competitions where the definition of a false start is any racer who leaves less than 0.10 seconds after the report of the gun.  It is assumed to be impossible to react that fast.  But now we have a continuous variable to play with.  How much more impossible is it to react within .10 seconds than to react within .11 seconds?  When you admit that there is a probability p>0, increasing in the threshold, that a racer is gifted enough to react within that threshold, the optimal incentive mechanism picks the threshold that balances type I and type II errors.  The maximum penalty is exacted when the threshold is violated.)

Any system involving warnings invites racers to try to anticipate the gun, increasing the number of false starts.  But the pre- and post-2003 rules play out differently when you think strategically.  Think of the costs and benefits of trying to get a slightly faster start.  The warning means that the costs of a potential false start are reduced.  Instead of being disqualified you are given a second chance but are placed in the dangerous position of being disqualified if you false start again.  In that sense, your private incentives to time the gun are identical whether the warning applies only to you or to the entire field.  But the difference lies in your treatment relative to the rest of the field.  In the post-2003 system the warning is applied to all racers, so your false start does not place you at a disadvantage.

Thus, both systems encourage quick starts but the post-2003 system encouraged them even more.  Indeed there is an equilibrium in which false starts occur with probability close to 1, and after that all racers are warned.  (Everyone expects everyone else to be going early, so there’s little loss from going early yourself.  You’ll be subject to the warning either way.)  After that ceremonial false start the race becomes identical to the current, post-2010, rule in which a single false start leads to disqualification.  My reading is that this equilibrium did indeed obtain and this was the reason for the rule change.  You could argue that the pre-2003 system was even worse because it led to a random number of false starts and so racers had to train for two types of competition:  one in which quick starts were a relevant strategy and one in which they were not.

Is there any better system?  Here’s a suggestion.  Go back to the 2003-2009 system with a single warning for the entire field.  The problem with that system was that the penalty for being the first to false start was so low that when you expected everyone else to be timing the gun your best response was to time the gun as well.  So my proposal is to modify that system slightly to mitigate this problem.  Now, if racer B is the first to false start, then in the restart if there is a second false start by, say, racer C, then both racer C and racer B are disqualified.  (In subsequent restarts you can either clear the warning and start from scratch or keep the warning in place for all racers.)

Here’s a second suggestion.  The racers start by pushing off the blocks.  Engineer the blocks so that they slide freely along their tracks and only become fixed in place at the precise moment that the gun is fired.

(For the vapor mill,  here are empirical predictions about the effect of previous rule-regimes on race outcomes:

1. Compared to pre-2003, under the 2003-2009 rules you should see more races with at least one false start but far fewer total false starts per race.  The current rules should have the fewest false starts.
2. Controlling for trend (people get faster over time) if you consider races where there was no false start, race times should be faster 2003-2009 than pre-2003.   That ranking reverses when you consider races in which there was at least one false start. Controlling for Usain Bolt, times should be unambiguously slower under current rules.)

I went out for a run and left some instructions for my daughter.

By the way, running is the suckiest form of exercise there is.  The only thing worse than my jog up and down the street is running on a treadmill, if only for the change of scenery.  Very slow change of scenery.  But I will admit that the boredom involved adds a dimension that you don’t get from actual, useful exercise like playing sports.  I can run around on a tennis court for hours but I am embarrassed to tell you that after about a year of regular running I can’t comfortably run more than a mile.  There being no assistance whatsoever from competitive spirit or just plain old enjoyment, running is a pure exercise of the will to prolong immediate suffering and boredom in return for some abstract, delayed benefit.

And that mile takes me more than 10 minutes.  I think.  I am too ashamed to time myself.

But nevertheless not so long as to make me feel uncomfortable leaving my 10-year-old at home for the duration (I actually don’t know what the law is, I hope I am not incriminating myself.)  And she had an assignment that she needed to finish so I suggested that she work on it while I was out.

Now there were also some other things that needed to be done.   And you never know what’s going to happen when she sits down to do her assignment.  Does she have all the stuff she needs, is she going to need some help? etc.  So ideally I would give her a contingency plan.  If for whatever reason you can’t do the assignment, do the other thing in the meantime.

But this is not always a good idea.  Just mentioning the contingency turns a clearly defined instruction into one which invites subjective interpretation, and wiggle room at the margin of acceptable contingencies.  “You said I should do the other thing so I did.”

Of course there is a tradeoff.  First of all, there’s the basic second-best trade-off. Without a plan B, when it turns out to be truly impossible to do plan A, you come home to find her on plan Wii.

But more importantly, she’s gotta learn how to judge the contingencies on her own, eventually.  The thing is, rightly or wrongly I think parents instinctively believe that in the early stages of that process kids read a lot, indeed too much, into the items put into the menu of options.  There is an excessive distinction between an unmentioned, and hence implicitly disallowed option and one which is mentioned but discouraged.

Unlearning that kind of inference, clearly a necessary step in the long run, can be tricky in the short run.

Heh, short run.

He is expecting regular raises.  Not every month, maybe not even every year but he expects a raise and he has his own timetable for when you should give it to him. No matter how hard you try to keep to a fair schedule of raises, uncertainty about his expectations together with other random factors mean that at some point you are going to fall behind.

As time passes with no raise, he is going to start slacking off.  Maybe just a little bit at first but it’s going to be noticeable.  Now from your perspective it just looks like he is not working as hard as when you first hired him.  You tell yourself stories about how gardeners start out by working hard to get your business and then slack off over time.  You might even consider that maybe he is slacking off because you aren’t giving him a raise but what are you going to do now?  You can’t possibly give him a raise and reward him for slacking off.  If anything your raise is going to come even later now.

And so he slacks off even more.  In fact he has been through this before so the very first slack-off was a big drop because he knew it was the beginning of the end. He’s gonna be fired pretty soon.

There is typically a fine for parking your car on the street facing the wrong direction, i.e. against traffic.  What is the harm in that?

Economic theory suggests that penalties should be attached to behaviors that are correlated with crime and not necessarily to criminal behavior itself.  For example, price fixing may be impossible to detect, but conspiracy to fix prices may be much easier.  It makes sense to make cheap talk a crime even though the talk itself causes no harm.

When your car is parked facing the wrong way it’s a sure sign that A) you previously committed the crime of driving the wrong way and B) you will soon do it again.

Clearly the reason that sex is so pleasurable is that it motivates us to have a lot of it.  It is evolutionarily advantageous to desire the things that make us more fit.  Sex feels good, we seek that feeling, we have a lot of sex, we reproduce more.

But that is not the only way to get motivated.  It is also advantageous to derive pleasure directly from having children.  We see children, we sense the joy we would derive from our own children and we are motivated to do what’s necessary to produce them, even if we had no particular desire for the intermediate act of sex.

And certainly both sources of motivation operate on us, but in different proportions. So it is interesting to ask what determines the optimal mix of these incentives. One alternative is to reward an intermediate act which has no direct effect on fitness but can, subject to idiosyncratic conditions together with randomness, produce a successful outcome which directly increases fitness.   Sex is such an act. The other alternative is to confer rewards upon a successful outcome (or penalties for a failure.)  That would mean programming us with a desire and love for children.

The tradeoff can be understood using standard intuitions from incentive theory.  The rewards are designed to motivate us to take the right action at the right time.  The drawback of rewarding only the final outcome is that it may be too noisy a signal of whether the right action was taken.  For example, not every encounter results in offspring.  If so, then a more efficient use of rewards to motivate an act of sex is to make sex directly pleasurable.  But the drawback of rewarding sex directly is that whether it is desirable to have sex right now depends on how likely it is to produce valuable offspring.  If we are made to care only about the (value of) offspring we are more likely to make the right decision under the right circumstances.

Now these balance out differently for males than for females.  When the female becomes pregnant and gives birth, that is a very strong signal that she had sex at an opportune time, but it conveys noisier information about him.  That is because, of course, this child could belong to any one of her (potentially numerous) mates.  Instilling a love for children is therefore a relatively more effective incentive instrument for her than for him.

As for love of sex, note that the evolutionary value of offspring is different for males than for females because females have a significant opportunity cost given that they get pregnant with one mate at a time. This means that the circumstances are nearly always right for males to have sex, but much more rarely so for females. It is therefore efficient for males to derive greater pleasure from sex.

(It is a testament to my steadfastness as a theorist that I stand firmly by the logic of this argument despite the fact that, at least in my personal experience, females derive immense pleasure from sex.)

Drawing:  Misread Trajectory from www.f1me.net

That was the title of a very interesting talk at the Biology and Economics conference I attended over the weekend at USC.  The authors are Juan Carillo, Isabelle Brocas and Ricardo Alonso.  It’s basically a model of how multitasking is accomplished when different modules in the brain are responsible for specialized tasks and those modules require scarce resources like oxygen in order to do their job.  (I cannot find a copy of the paper online.)

The brain is modeled as a kludgy organization.  Imagine that the listening-to-your-wife division and the watching-the-French-Open division of YourBrainINC operate independently of one another and care about nothing but completing their individual tasks.  What happens when both tasks are presented at the same time? In the model there is a central administrator in charge of deciding how to ration energy between the two divisions.  What makes this non-trivial is that only the individual divisions know how much juice they are going to need based on the level of difficulty of this particular instance of the task.

Here’s the key perspective of the model.  It is assumed that the divisions are greedy:  they want all the resources they need to accomplish their task and only the central administrator internalizes the tradeoffs across the two tasks.  This friction imposes limits on efficient resource allocation.  And these limits can be understood via a mechanism design problem which is novel in that there are no monetary transfers available.  (If only the brain had currency.)

The optimal scheme has a quota structure which has some rigidity.  There is a cap on the amount of resources a given division can utilize and that cap is determined solely by the needs of the other division.  (This is a familiar theme from economic incentive mechanisms.)  An implication is that there is too little flexibility in re-allocating resources to difficult tasks.  Holding fixed the difficulty of task A, as the difficulty of task B increases, eventually the cap binds.  The easy task is still accomplished perfectly but errors start to creep in on the difficult task.

(Drawing:  Our team is non-hierarchical from www.f1me.net)

Apparently it’s biology and economics week for me because after Andrew Caplin finishes his fantastic series of lectures here at NU tomorrow, I am off to LA for this conference at USC on Biology, Neuroscience, and Economic Modeling.

Today Andrew was talking about the empirical foundations of dopamine as a reward system.  Along the way he reminded us of an important finding about how dopamine actually works in the brain.  It’s not what you would have guessed.  If you take a monkey and do a Pavlovian experiment where you ring a bell and then later give him some goodies, the dopamine neurons fire not when the actual payoff comes, but instead when the bell rings.  Interestingly, when you ring the bell and then don’t come through with the goods there is a dip in dopamine activity that seems to be associated with the letdown.

The theory is that dopamine responds to changes in expectations about payoffs, and not directly to the realization of those payoffs.  This raises a very interesting theoretical question:  why would that be Nature’s most convenient way to incentivize us?  Think of Nature as the principal, you are the agent.  You have decision-making authority because you know what choices are available and Nature gives you dopamine bonuses to guide you to good decisions.  Can you come up with the right set of constraints on this moral hazard problem under which the optimal contract uses immediate rewards for the expectation of a good outcome rather than rewards that come later when the outcome actually obtains?

Here’s my lame first try, based on discount factors.  Depending on your idiosyncratic circumstances your survival probability fluctuates, and this changes how much you discount the expectation of future rewards.  Evolution can’t react to these changes.  But if Nature is going to use future rewards to motivate your behavior today she is going to have to calibrate the magnitude of those incentive payments to your discount factor.  The fluctuations in your discount factor make this prone to error. Immediate payments are better because they don’t require Nature to make any guesses about discounting.

Andrew Caplin is visiting Northwestern this week to give a series of lectures on psychology and economics.  Today he talked about some of his early work and briefly mentioned an intriguing paper that he wrote with Kfir Eliaz.

Too few people get themselves tested for HIV infection.  Probably this is because the anxiety that would accompany the bad news overwhelms the incentive to get tested in the hopes of getting the good news (and also the benefit of acting on whatever news comes out.)  For many people, if they have HIV they would much rather not know it.

How do you encourage testing when fear is the barrier?  Caplin and Eliaz offer one surprisingly simple, yet surely controversial possibility:  make the tests less informative.  But not just any old way, because we want to maintain the carrot of a positive result but minimize the deterrent of a negative result.  Now we could try outright deception by certifying everyone who tests negative but giving no information to those who test positive.  But that won’t fool people for long.  Anyone who is not certified will know he is positive and we are back to the anxiety deterrent.

But even when we are bound by the constraint that subjects will not be fooled there is a lot of freedom to manipulate the informativeness of the test.  Here’s how to ramp down the deterrent effect of a bad result without losing much of the incentive effects of a good result.  A patient who is tested will receive one of two outcomes:  a certification that he is negative or an inconclusive result.  The key idea is that when the patient is negative the test will be designed to produce an inconclusive result with positive probability p.  (This could be achieved by actually degrading the quality of the test or just withholding the result with positive probability.)

Now a patient who receives an inconclusive result won’t be fooled.  He will become more pessimistic, that is inevitable.  But only slightly more pessimistic.  The larger we choose p (the key policy instrument) the less scary is an inconclusive result.  And no matter what p is, a certification that the patient is HIV-negative is a 100% certification.  There is a tradeoff that arises, of course, and that is that high p means that we get the good news less often.  But it should be clear that some p, often strictly between 0 and 1, would be optimal in the sense of maximizing testing and minimizing infection.
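Bayes’ rule makes the “only slightly more pessimistic” claim concrete.  In this sketch the 1% prior infection rate is a made-up number for illustration:

```python
def posterior_if_inconclusive(prior, p):
    """P(positive | inconclusive) by Bayes' rule, when positive patients are
    never certified and negative patients get an inconclusive result with
    probability p."""
    return prior / (prior + (1 - prior) * p)

# With a 1% prior, a large p makes an inconclusive result barely move beliefs,
# while a small p makes it nearly a confirmation.
print(posterior_if_inconclusive(0.01, 0.9))  # ≈ 0.0111
print(posterior_if_inconclusive(0.01, 0.1))  # ≈ 0.0918
```

As p rises toward 1 the inconclusive result carries almost no information, which is exactly why a large p softens the anxiety deterrent at the cost of certifying fewer of the uninfected.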

In the New Yorker, Lawrence Wright discusses a meeting with Hamid Gul, the former head of the Pakistani secret service I.S.I. In his time as head, Gul channeled the bulk of American aid in a particular direction:

I asked Gul why, during the Afghan jihad, he had favored Gulbuddin Hekmatyar, one of the seven warlords who had been designated to receive American assistance in the fight against the Soviets. Hekmatyar was the most brutal member of the group, but, crucially, he was a Pashtun, like Gul.

But

Gul offered a more principled rationale for his choice: “I went to each of the seven, you see, and I asked them, ‘I know you are the strongest, but who is No. 2?’ ” He formed a tight, smug smile. “They all said Hekmatyar.”

Gul’s mechanism is something like the following: Each player is allowed to cast a vote for everyone but himself.  The warlord who gets the most votes gets a disproportionate amount of U.S. aid.

By not allowing a warlord to vote for himself, Gul eliminates the warlord’s obvious incentive to push his own candidacy to extract U.S. aid. Such a mechanism would yield no information.  With this strategy unavailable, each player must decide how to cast a vote for the others.  Voting mechanisms have multiple equilibria but let us look at a “natural” one where a player conditions on the event that his vote is decisive (i.e. his vote can send the collective decision one way or the other).   In this scenario, each player must decide how the allocation of U.S. aid to the player he votes for feeds back to him.  Therefore, he will vote for the player who will use the money to take an action that most helps him, the voter.  If fighting Soviets is such an action, he will vote for the strongest player.  If instead he is worried that the money will be used to buy weapons and soldiers to attack other warlords, he will vote for the weakest warlord.

So, Gul’s mechanism does aggregate information in some circumstances even if, as Wright intimates, Gul is simply supporting a fellow Pashtun.

Here is a problem that has been in the back of my mind for a long time.  What is the second-best dominant-strategy incentive-compatible (DSIC) mechanism in a market setting?

For some background, start with the bilateral trade problem of Myerson-Satterthwaite.  We know that among all DSIC, budget-balanced mechanisms the most efficient is a fixed-price mechanism.  That is, a price is fixed ex ante and the buyer and seller simply announce whether they are willing to trade at that price.  Trade occurs if and only if both are willing and if so the buyer pays the fixed price to the seller. This is Hagerty and Rogerson.

Now suppose there are two buyers and two sellers.  How would a fixed-price mechanism work?   We fix a price p.   Buyers announce their values and sellers announce their costs.  We first see if there are any trades that can be made at the fixed price p.  If both buyers have values above p and both sellers have values below then both units trade at price p.  If two buyers have values above p and only one seller has value below p then one unit will be sold: the buyers will compete in a second-price auction and the seller will receive p (there will be a budget surplus here.) Similarly if the sellers are on the long side they will compete to sell with the buyer paying p and again a surplus.
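Here is a minimal Python sketch of those rules (my own encoding of the description above). It returns what each trading buyer pays and what each trading seller receives, so any gap between the two is the budget surplus:

```python
def fixed_price_2x2(p, buyer_values, seller_costs):
    """Fixed-price mechanism for two buyers and two sellers.
    Returns a list of (buyer_pays, seller_receives) per executed trade."""
    willing_buyers = sorted([v for v in buyer_values if v >= p], reverse=True)
    willing_sellers = sorted([c for c in seller_costs if c <= p])
    nb, ns = len(willing_buyers), len(willing_sellers)
    if nb == 2 and ns == 2:
        return [(p, p), (p, p)]          # both units trade at the fixed price
    if nb == 2 and ns == 1:
        # buyers compete in a second-price auction; the seller just gets p,
        # so the runner-up bid (>= p) creates a budget surplus
        return [(willing_buyers[1], p)]
    if nb == 1 and ns == 2:
        # sellers compete in a reverse second-price auction; the buyer pays p,
        # the winning seller receives the runner-up cost (<= p)
        return [(p, willing_sellers[1])]
    if nb >= 1 and ns >= 1:
        return [(p, p)]                  # a single trade at the fixed price
    return []                            # no willing pair: no trade
```

For example, `fixed_price_2x2(0.5, [0.9, 0.7], [0.2, 0.8])` yields `[(0.7, 0.5)]`: the two buyers compete, the winner pays 0.7, the lone willing seller receives 0.5, and 0.2 is surplus. Since no announcement affects the price a trader faces conditional on trading, truthful reporting remains a dominant strategy under these rules.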

A fixed-price mechanism is no longer optimal.  The reason is that we can now use competition among buyers and sellers and “price discovery.”  A simple mechanism (but not the optimal one) is a double auction.  The buyers play a second-price auction between themselves, the sellers play a second-price reverse auction between themselves. The winners of the two auctions have won the right to trade. They will trade if and only if the second-highest buyer value (which is what the winning buyer will pay) exceeds the second-lowest seller value (which is what the winning seller will receive.)  This ensures that there will be no deficit.  There might be a surplus, which would have to be burned.

This mechanism is DSIC and never runs a deficit.  It is not optimal however because it only sells one unit.  But it has the virtue of allowing the “price” to adjust based on “supply and demand.”  Still, there is no welfare ranking between this mechanism and a fixed-price mechanism because a fixed-price mechanism will sometimes trade two units (if the price was chosen fortuitously) and sometimes trade no units (if the price turned out too high or low) even though the price-discovery mechanism would have traded one.

But here is a mechanism that dominates both.  It’s a hybrid of the two. We fix a price p and we interleave the rules of the fixed-price mechanism and the double auction in the following order:

1. First check if we can clear two trades at price p.  If so, do it and we are done.
2. If not, then check if we can sell one unit by the double auction rules.  If so, do it and we are done.
3. Finally, if no trades were executed using the previous two steps then return to the fixed price p and see if we can execute a single trade using it.

I believe this mechanism is DSIC (exercise for the reader, the order of execution is crucial!).  It never runs a deficit and it generates more trade than either standalone mechanism: fixed-price or double auction.
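The three steps can be sketched in a few lines of Python. This is my own encoding of the description above, with payments at each step following the respective mechanism's rules:

```python
def hybrid(p, buyer_values, seller_costs):
    """Hybrid fixed-price / double-auction mechanism (2 buyers, 2 sellers).
    Returns a list of (buyer_pays, seller_receives) per executed trade."""
    b = sorted(buyer_values, reverse=True)  # b[0] >= b[1]
    s = sorted(seller_costs)                # s[0] <= s[1]
    # Step 1: try to clear two trades at the fixed price p.
    if b[1] >= p >= s[1]:
        return [(p, p), (p, p)]
    # Step 2: double-auction rules -- the winning buyer pays the losing
    # bid b[1], the winning seller receives the losing ask s[1]; trade
    # happens only if that runs no deficit.
    if b[1] >= s[1]:
        return [(b[1], s[1])]
    # Step 3: fall back to a single trade at the fixed price.
    if b[0] >= p >= s[0]:
        return [(p, p)]
    return []

# A profile where the fixed price alone trades nothing but the hybrid trades:
print(hybrid(0.5, [0.45, 0.40], [0.10, 0.20]))  # -> [(0.4, 0.2)]
```

In the printed example neither buyer would accept the fixed price 0.5, so a standalone fixed-price mechanism trades zero units, yet the double-auction step clears one trade. As the post emphasizes, the order of execution matters for incentive compatibility.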

Very interesting research question:  is this a second-best mechanism?  If not, what is?  If so, how do you generalize it to markets with an arbitrary number of buyers and sellers?

A buyer and a seller negotiating a sale price.  The buyer has some privately known value and the seller has some privately known cost, and with positive probability there are gains from trade but with positive probability the seller’s cost exceeds the buyer’s value.  (So this is the Myerson-Satterthwaite setup.)

Do three treatments.

1. The experimenter fixes a price in advance and the buyer and seller can only accept or reject that price.  Trade occurs if and only if they both accept.
2. The seller makes a take-it-or-leave-it offer.
3. The parties can freely negotiate and they trade if and only if they agree on a price.

Theoretically there is no clear ranking of these three mechanisms in terms of their efficiency (the total gains from trade realized.)  In practice the first mechanism clearly sacrifices some efficiency in return for simplicity and transparency.  If the price is set right the first mechanism would outperform the second in terms of efficiency due to a basic market power effect.  In principle the third treatment could allow the parties to find the most efficient mechanism, but it would also allow them to negotiate their way to something highly inefficient.

A conjecture would be that with a well-chosen price the first mechanism would be the most efficient in practice.   That would be an interesting finding.
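The theoretical benchmarks for the first two treatments are easy to compute by simulation. The distributional assumption here (values and costs i.i.d. uniform on [0,1]) and the fixed price p = 1/2 are mine, chosen purely for illustration; under them the two treatments happen to deliver the same expected gains from trade, consistent with there being no general theoretical ranking:

```python
import random

# Monte Carlo benchmarks for treatments 1 and 2, assuming buyer values
# and seller costs are i.i.d. uniform on [0,1] (an illustrative assumption).
random.seed(0)
N = 200_000
p = 0.5                                # fixed price for treatment 1
fixed = tioli = first_best = 0.0
for _ in range(N):
    v, c = random.random(), random.random()
    if v >= p >= c:                    # treatment 1: both accept the price
        fixed += v - c
    ask = (1 + c) / 2                  # treatment 2: seller's offer, which
    if v >= ask:                       # maximizes (ask - c) * (1 - ask)
        tioli += v - c
    first_best += max(v - c, 0.0)      # efficient benchmark
print(round(fixed / N, 3), round(tioli / N, 3), round(first_best / N, 3))
# approximately 0.125, 0.125, 0.167
```

Both treatments realize about 1/8 in expectation against a first-best of 1/6, so in this uniform case any ranking observed in the lab would come from behavior rather than from the mechanisms themselves.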

A variation would be to do something similar but in a public goods setting.  We would again compare simple but rigid mechanisms with mechanisms that allow for more strategic behavior.  For example, a version of mechanism #1 would be one in which each individual was asked to contribute an equal share of the cost and the project succeeds if and only if all agree to their contributions.  Mechanism #3 would allow arbitrary negotiation with the only requirement being that the total contributions cover the cost of the project.

In the public goods setting I would conjecture that the opposite force is at work.  The scope for additional strategizing (seeding, cajoling, guilt-tripping, etc) would improve efficiency.

Anybody know if anything like these experiments have been done?

Kobe Bryant was recently fined $100,000 for making a homophobic comment to a referee. Ryan O’Hanlon, writing for The Good Men Project blog, puts it into perspective:

• It’s half as bad as conducting improper pre-draft workouts.
• It’s twice as bad as saying you want to leave the NBA and go home.
• It’s just as bad as talking about the collective bargaining agreement.
• It’s twice as bad as saying one of your players used to smoke too much weed.
• It’s just as bad as writing a letter in Comic Sans about a former player.
• It’s just as bad as saying you want to sign the best player in the NBA.
• It’s four times as bad as throwing a towel to distract a guy when he’s shooting free throws.
• It’s four times as bad as kicking a water bottle.
• It’s 10 times as bad as standing in front of your bench for an extended period of time.
• It’s 10 times as bad as pretending to be shot by a guy who once brought a gun into a locker room.
• It’s 13.33 times as bad as tweeting during a game.
• It’s five times as bad as throwing a ball into the stands.
• It’s four times as bad as throwing a towel into the stands.
• It’s twice as bad as lying about smelling like weed and having women in a hotel room during the rookie orientation program.
• It’s one-fifth as bad as snowboarding.

That’s based on a comparison of the fines that the various misdeeds earned. The “n times as bad” is the natural interpretation of the fines since we are used to thinking of penalties as being chosen to fit the crime. But NBA justice needn’t conform to our usual intuitions because this is an employer/employee relationship governed by an actual contract, not just a social contract. We could try to think of these fines as part of the solution to a moral hazard problem. Independent of how “bad” the behaviors are, there are some that the NBA wants to discourage, and fines are chosen in order to get the incentives right. But that’s a problematic interpretation too.
From the moral hazard perspective the optimal fine for many of these would be infinite. Any finite fine is essentially a license to behave badly as long as the player has a strong enough desire to do so: strong enough to outweigh the cost of the fine. You can’t throw a towel to distract a guy when he’s shooting free throws unless it’s so important to you that you are willing to pay $250,000 for the privilege.

You can rescue moral hazard as an explanation in some cases because if there is imperfect monitoring then the optimal fine will have to be finite: with imperfect monitoring the fine cannot be a perfect deterrent.  For example it may not be possible to detect with certainty that you were lying about smelling like weed and having women in a hotel room during the rookie orientation program.  If so then false positives will have to be penalized.  And when the fine will be paid with positive probability even with players on their best behavior, you are now trading off incentives vs. risk exposure.

But the imperfect monitoring story can’t explain why Comic Sans doesn’t get an infinite fine, purifying the game of that transgression once and for all.  Or tweeting, or snowboarding or most of the others as well.

It could be that the NBA knows that egregious fines can be contested in court or trigger some other labor dispute. This would effectively put a cap on fines at just the level where it is not worth the player’s time and effort to dispute it.  But that doesn’t explain why the fines are not all pegged at that cap.  It could be that the likelihood that a fine of a given magnitude survives such a challenge depends on the public perception of the crime.  That could explain some of the differences but not many.  Why is the fine for saying you want to leave the NBA larger than the fine for throwing a ball into the stands?

Once we’ve dispensed with those theories it just might be that the NBA recognizes that players simply want to behave badly sometimes. Without that outlet something else is going to give.  Poor performance perhaps or just an eventual Dennis Rodman.  The NBA understands that a fine is a price.  And with the players having so many ways of acting out to choose from, the NBA can use relative prices to steer them to the efficient frontier.  Instead of kicking a water bottle, why not get your frustrations out by sending 3 1/2 tweets during the game? Instead of saying that one of your players smokes too much weed, go ahead and indulge your urge to stand out in front of the bench for an extended period of time. You can do it for 5 times as long as the last guy or even stand 5 times farther out.

Not surprisingly, all of these choices start to look like real bargains compared to snowboarding and improper pre-draft workouts.

The opening gambit of the book is surprisingly simple: If you were sentenced to five years in prison but had the option of receiving lashes instead, what would you choose? You would probably pick flogging. Wouldn’t we all?

I propose we give convicts the choice of the lash at the rate of two lashes per year of incarceration. One cannot reasonably argue that merely offering this choice is somehow cruel, especially when the status quo of incarceration remains an option. Prison means losing a part of your life and everything you care for. Compared with this, flogging is just a few very painful strokes on the backside. And it’s over in a few minutes. Often, and often very quickly, those who said flogging is too cruel to even consider suddenly say that flogging isn’t cruel enough. Personally, I believe that literally ripping skin from the human body is cruel. Even Singapore limits the lash to 24 strokes out of concern for the criminal’s survival. Now, flogging may be too harsh, or it may be too soft, but it really can’t be both.

The article is an excellent example of how considering an alternative (flogging replacing prison), even a non-serious one, can make you think about the status quo in a new way.

If we could calibrate the number of lashes so as to create an equal disincentive but at a tiny fraction of the cost, that should be a Pareto improvement, right? Somehow that doesn’t seem right.  I think the thought experiment reveals that one important part of incarceration is simply to prevent the criminal from committing more crimes.

If N lashes is just as unpleasant as 1 year in prison what exactly does that mean? It says that N lashes plus whatever I decide to do during the next year is just as unpleasant as being shut in for a year.  It will quite often be that the pivotal comparison is between prison and N lashes plus another year worth of crime.  In that case we certainly don’t have a Pareto improvement.

(hoodhi:  The Browser.)

This is the third and final post on ticket pricing motivated by the new restaurant Next in Chicago and proprietors Grant Achatz and Nick Kokonas’s new ticket policy.   In the previous two installments I tried to use standard mechanism design theory to see what comes out when you feed in some non-standard pricing motives having to do with enhancing “consumer value.”  The two attempts that most naturally come to mind yielded insights but not a useful pricing system. Today the third time is the charm.

Things start to fall into place when we pay close attention to this part of Nick’s comment to us:

we never want to invert the value proposition so that customers are paying a premium that is disproportionate to the amount of food / quality of service they receive.

I propose to formalize this as follows.  From the restaurant’s point of view, consumer surplus is valuable but some consumers are prepared to bid even more than the true value of the service they will get.  The restaurant doesn’t count these skyscraping bids as actually reflecting consumer surplus and they don’t want to tailor their mechanism to cater to them.  In particular, the restaurant distinguishes willingness to pay from “value.”

I can think of a number of sensible reasons they would take this view.  They might know that many patrons overestimate the value of a seating at Next. Indeed the restaurant might worry that high prices by themselves artificially inflate willingness to pay.  They don’t want a bubble.  And they worry about their reputation if someone pays $1,700 for a ticket, gets only $1,000 worth of value, and publicly gripes.  Finally they might just honestly believe that willingness to pay is a poor measure of welfare especially when comparing high versus low.

Whatever the reason, let’s run with it.  Let’s define $W(v)< v$ to be the value, as the restaurant perceives it, that would be realized by service to a patron whose willingness to pay is $v$.  One natural example would be

$W(v) = \min \{v, \bar v\}$

where $\bar v$ is some prespecified “cap.”  It would be like saying that nobody, no matter how much they say they are willing to pay, really gets a value larger than, say, $\bar v = \$1000$ from eating at Next.

Now let’s consider the optimal pricing mechanism for a restaurant that maximizes a weighted sum of profit and consumer’s surplus, where now consumer’s surplus is measured as the difference between $W(v)$ and whatever price is paid. The weight on profit is $\alpha$ and the weight on consumer surplus is $1- \alpha$.  After you integrate by parts you now get the following formula for virtual surplus.

$(1 - \alpha) W(v) + (2 \alpha - 1) [v - \frac{1-F(v)}{f(v)} ]$

And now we have something!  Because  if $\alpha$ is between $0$ and $1/2$ then the first term is increasing in $v$ (up to the cap $\bar v$) and the second term is decreasing.  For $\alpha$ close enough to $1/2$, the overall virtual surplus is going to be first increasing and then decreasing.  And that means that the optimal mechanism is something new.  When bids are in the low to moderate range, you use an auction to decide who gets served.  But above some level, high bidders don’t get any further advantage and they are all lumped together.
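The hump shape is easy to check numerically in the uniform case on $[0,1]$, where $(1-F(v))/f(v) = 1-v$. The particular parameter values below ($\alpha = 0.45$, $\bar v = 0.7$) are arbitrary choices of mine for the sketch:

```python
# Virtual surplus (1-a)*W(v) + (2a-1)*[v - (1-F(v))/f(v)] for uniform F,
# with W(v) = min(v, v_bar).  Parameters are illustrative assumptions.
alpha, v_bar = 0.45, 0.7

def virtual_surplus(v):
    W = min(v, v_bar)
    return (1 - alpha) * W + (2 * alpha - 1) * (v - (1 - v))

grid = [i / 1000 for i in range(1001)]
vs = [virtual_surplus(v) for v in grid]
peak = grid[vs.index(max(vs))]
print(peak)  # the peak sits exactly at the cap v_bar = 0.7
```

Below the cap the slope is $3\alpha - 1 > 0$ (since $\alpha > 1/3$ here); above it the slope is $2(2\alpha - 1) < 0$ (since $\alpha < 1/2$), so virtual surplus rises up to $\bar v$ and falls thereafter, which is exactly the auction-then-lottery shape described next.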

The optimal mechanism is a hybrid between an auction and a lottery.  It has no reserve price (over and above the cost of service) so there are never empty seats. It earns profits but eschews exorbitant prices.

It has clear advantages over a fixed price.  A fixed price is a blunt instrument that has to serve two conflicting purposes.  It has to be high enough to earn sufficient revenue on dates when demand is high enough to support it, but not so high that it leads to empty seats on dates when demand is lower. An auction with rationing at the top is flexible enough to deal with both tasks independently.  When demand is high the fixed price (and rationing) is in effect. When demand is low the auction takes care of adjusting the price downward to keep the restaurant full.  The revenue-enhancing effect of low prices is an under-appreciated benefit of an auction.  Finally, it’s an efficient allocation system for the middle range of prices so scalping motivations are reduced compared to a fixed price.

Incentives for scalping are not eliminated altogether because of the rationing at the top. This can be dealt with by controlling the resale market.  Indeed here is one clear message that comes out of all of this.  Whatever motivation the restaurant has for rationing sales, it is never optimal to allow unfettered resale of tickets.  That only undermines what you were trying to achieve.  Now Grant Achatz and Nick Kokonas understand that but they are forced to condone the Craigslist market because by law non-refundable tickets must be freely transferable.

But the cure is worse than the disease.  In fact refundable tickets are your friend. The reason someone wants to return their ticket for a refund is that their willingness to pay has dropped below the price. But there is somebody else with a willingness to pay that is above the price.  We know this for sure because tickets are being rationed at that price. Granting the refund allows the restaurant to immediately re-sell it to the next guy waiting in line. Indeed, a hosted resale market would enable the restaurant to ensure that such transactions take place instantaneously through an automated system according to the same terms under which tickets were originally sold.

Someone ought to try this.

Restaurants, touring musicians, and sports franchises are not out to gouge every last penny out of their patrons.  They want patrons to enjoy their craft but also to come away feeling like they didn’t pay an arm and a leg.  Yesterday I tried to formalize this motivation as maximizing consumer surplus but that didn’t give a useful answer. Maximizing consumer surplus means either complete rationing (and zero profit) or going all the way to an auction (a more general argument for why appears below.)  So today I will try something different.

Presumably the restaurant cares about profits too.  So it makes sense to study the mechanism that maximizes a weighted sum of profits and consumer’s surplus. We can do that.  Standard optimal mechanism design proceeds by a sequence of mathematical tricks to derive a measure of a consumer’s value called virtual surplus.  Virtual surplus allows you to treat any selling mechanism you can imagine as if it worked like this

1. Consumers submit “bids”
2. Based on the bids received the seller computes the virtual surplus of each consumer.
3. The consumer with the highest virtual surplus is served.

If you write down the optimal mechanism design problem where the seller puts weight $\alpha$ on profits and weight $1 - \alpha$ on consumer surplus, and you do all the integration by parts, you get this formula for virtual surplus.

$\alpha v + (1 - 2\alpha) \frac{1 - F(v)}{f(v)}$

where $v$ is the consumer’s willingness to pay, $F(v)$ is the proportion of consumers with willingness to pay less than $v$ and $f(v)$ is the corresponding probability density function.   That last ratio is called the (inverse) hazard rate.

As usual, just staring down this formula tells you just about everything you want to know about how to design the pricing system.  One very important thing to know is what to do when virtual surplus is a decreasing function of $v$. If we have a decreasing virtual surplus then we learn that it’s at least as important to serve the low valuation buyers as those with high valuations (see point 3 above.)

But here’s a key observation: it’s impossible to sell to low-valuation buyers and not also to high-valuation buyers because whatever price the former will agree to pay, the latter will pay too.  So a decreasing virtual surplus means that you do the next best thing: you treat high- and low-value types the same. This is how rationing becomes part of an optimal mechanism.

For example, suppose the weight on profit $\alpha$ is equal to $0$. That brings us back to yesterday’s problem of just maximizing consumer surplus. And our formula now tells us why complete rationing is optimal: virtual surplus is just equal to the inverse hazard rate, which is typically monotonically decreasing. Intuitively, here’s what the virtual surplus is telling us when we are trying to maximize consumer surplus. If we are faced with two bidders and one has a higher valuation than the other, then to try to discriminate would require that we set a price in between the two. That’s too costly for us because it would cut into the consumer surplus of the eventual winner.

So that’s how we get the answer I discussed yesterday.  Before going on I would like to elaborate on yesterday’s post based on correspondence I had with a few commenters, especially David Miller and Kane Sweeney. Their comments highlight two assumptions that are used to get the rationing conclusion:  monotone hazard rate, and no payments to non-buyers.  It gets a little more technical than usual so I am going to put it here in an addendum to yesterday (scroll down for the addendum.)

Now, back to the general case we are looking at today, we can consider other values of $\alpha$.

An important benchmark case is $\alpha = 1/2$ when virtual surplus reduces to just $v$, now monotonically increasing.  That says that a seller who puts equal weight on profits and consumer surplus will always allocate to the highest bidder because his virtual surplus is higher.  An auction does the job, in fact a second price auction is optimal.  The seller is implementing the efficient outcome.

More interesting is when $\alpha$ is between $0$ and $1/2$. In general then the shape of the virtual surplus will depend on the distribution $F$, but the general tendency will be toward either complete rationing or an efficient auction.  To illustrate, suppose that willingness to pay is distributed uniformly from $0$ to $1$. Then virtual surplus reduces to

$(3 \alpha - 1) v + (1 - 2 \alpha)$

which is either decreasing over the whole range of $v$ (when $\alpha \leq 1/3$), implying complete rationing or increasing over the whole range (when $\alpha > 1/3$), prescribing an auction.
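A quick Monte Carlo sanity check of that cutoff, using the two-bidder uniform example with cost zero (the simulation design is mine): rationing at cost earns zero profit and all surplus goes to consumers, while a second-price auction splits total surplus between revenue and consumer surplus, and the weighted objective flips at $\alpha = 1/3$:

```python
import random

# Two bidders with i.i.d. uniform [0,1] values; cost of service is zero.
random.seed(1)
N = 200_000
ration_cs = auction_cs = auction_rev = 0.0
for _ in range(N):
    v1, v2 = random.random(), random.random()
    ration_cs += 0.5 * (v1 + v2)   # random winner, served at cost (= 0)
    hi, lo = max(v1, v2), min(v1, v2)
    auction_rev += lo              # second-price auction revenue
    auction_cs += hi - lo          # winner's surplus
for alpha in (0.2, 0.4):
    ration_obj = (1 - alpha) * ration_cs / N   # rationing earns no profit
    auction_obj = (alpha * auction_rev + (1 - alpha) * auction_cs) / N
    print(alpha, "rationing" if ration_obj > auction_obj else "auction")
# alpha = 0.2 favors rationing; alpha = 0.4 favors the auction
```

The simulation recovers the benchmark numbers (rationing delivers expected consumer surplus of one half; the auction delivers one third each of revenue and consumer surplus), so the weighted objective crosses exactly where the virtual-surplus slope changes sign.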

Finally when $\alpha > 1/2$ virtual surplus is the difference between an increasing function and a decreasing function and so it is increasing over the whole range and this means that an auction is optimal (now typically with a reserve price above cost so that in return for higher profits the restaurant lives with empty tables and inefficiency.  This is not something any restaurant would choose if it can at all avoid it.)

What do we conclude from this?  Maximizing a weighted sum of consumer surplus and profit again yields one of two possible mechanisms: complete rationing or an auction.  Neither of these mechanisms seem to fit what Nick Kokonas was looking for in his comment to us and so we have to go back to the drawing board again.

Tomorrow I will take a closer look and extract a more refined version of Nick’s objective that will in fact produce a new kind of mechanism that may just fit the bill.

Addendum: Check out these related papers by Bulow and Klemperer (dcd: glen weyl) and by Daniele Condorelli.

Last week, in response to our proposal for how to run a ticket market, Nick Kokonas of Next Restaurant wrote something interesting.

Simply, we never want to invert the value proposition so that customers are paying a premium that is disproportionate to the amount of food / quality of service they receive. Right now we have it as a great bargain for those who can buy tickets. Ideally, we keep it a great value and stay full.

Economists are not used to that kind of thinking and certainly not accustomed to putting such objectives into our models, but we should.  Many sellers share Nick’s view and the economist’s job is to show the best way to achieve a principal’s objective, whatever it may be.  We certainly have the tools to do it.

Here’s an interesting observation to start with.  Suppose that we interpret Nick as wanting to maximize consumer surplus.  What pricing mechanism does that? A fixed price has the advantage of giving high consumer surplus when willingness to pay is high.  The key disadvantage is rationing:  a fixed price has no way of ensuring that the guy with a high value and therefore high consumer surplus gets served ahead of a guy with a low value.

By contrast an auction always serves the guy with the highest value and that translates to higher consumer surplus at any given price.  But the competition of an auction will lead to higher prices.  So which effect dominates?

Here’s a little example. Suppose you have two bidders and each has a willingness to pay that is distributed according to the uniform distribution on the interval $[0,1]$.  Let’s net out the cost of service and hence take that to be zero.

If you use a rationing system, each bidder has a 50-50 chance of winning and paying nothing (i.e. paying the cost of service.)  So a bidder whose value for service is $v$ will have expected consumer surplus equal to $v/2$.

If instead you use an auction, what happens?  First, the highest bidder will win so that a bidder with value $v$ wins with probability $v$.  (That’s just the probability that his opponent had a lower value.)  For bidders with high values that is going to be higher than the 50-50 probability from the rationing system. That’s the benefit of an auction.

However he is going to have to pay for it and his expected payment is $v/2$. (The simplest way to see this is to consider a second-price auction where he pays his opponent’s bid.  His opponent has a dominant strategy to bid his value, and with the uniform distribution that value will be $v/2$ on average conditional on being below $v$.)  So his consumer surplus is only

$v (v - v/2) = v^{2}/2$

because when he wins his surplus is his value minus his expected payment $v- v/2$, and he wins with probability $v$.

So in this example we see that, from the point of view of consumer surplus, the benefits of the efficiency of an auction are more than offset by the cost of higher prices.  But this is just one example and an auction is just one of many ways we could think of improving upon rationing.
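The two interim formulas are easy to verify by simulation. Here is a sketch for a bidder with value $v = 0.8$ (an arbitrary choice of mine) facing one uniform opponent:

```python
import random

# Check the v/2 (rationing) vs v^2/2 (second-price auction) comparison
# for a bidder with value v = 0.8 against one uniform [0,1] opponent.
random.seed(2)
N = 200_000
v = 0.8
ration = auction = 0.0
for _ in range(N):
    opp = random.random()
    if random.random() < 0.5:   # rationing: 50-50 lottery, price = cost = 0
        ration += v
    if opp < v:                 # auction: win and pay the opponent's bid
        auction += v - opp
print(round(ration / N, 2), round(auction / N, 2))  # about 0.40 vs 0.32
```

The simulated averages match $v/2 = 0.40$ and $v^2/2 = 0.32$, confirming that for this bidder the higher price under the auction more than offsets the higher chance of being served.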

However, it turns out that the best mechanism for maximizing consumer surplus is always complete rationing (I will prove this as a part of a more general demonstration tomorrow.)  Set price equal to marginal cost and use a lottery (or a queue) to allocate among those willing to pay the price.  (I assume that the restaurant is not going to just give away money.)

What this tells us is that maximizing consumer surplus can’t be what Nick Kokonas wants.  Because with the consumer surplus maximizing mechanism, the restaurant just breaks even.  And in this analysis we are leaving out all of the usual problems with rationing such as scalping, encouraging bidders with near-zero willingness to pay to submit bids, etc.

So tomorrow I will take a second stab at the question in search of a good theory of pricing that takes into account the “value proposition” motivation.

Addendum:  I received comments from David Miller and Kane Sweeney that will allow me to elaborate on some details.  It gets a little more technical than the rest of these posts so you might want to skip over this if you are not acquainted with the theory.

David Miller reminded me of a very interesting paper by Ran Shao and Lin Zhou.  (See also this related paper by the same authors.) They demonstrate a mechanism that achieves a higher consumer surplus than the complete rationing mechanism and indeed that achieves the highest consumer surplus among all dominant-strategy, individually rational mechanisms.

Before going into the details of their mechanism let me point out the difference between the question I am posing and the one they answer.   In formal terms I am imposing an additional constraint, namely that the restaurant will not give money to any consumer who does not obtain a ticket.  The restaurant can give tickets away but it won’t write a check to those not lucky enough to get freebies.  This is the right restriction for the restaurant application for two reasons.  First, if the restaurant wants to maximize consumer surplus it’s because it wants to make people happy about the food they eat, not happy about walking away with no food but a payday.  Second, as a practical matter a mechanism that gives away money is just going to attract non-serious bidders who are looking for a handout.

In fact Shao and Zhou are starting from a related but conceptually different motivation: the classical problem of bilateral trade between two agents.  In the most natural interpretation of their model the two bidders are really two agents negotiating the sale of an object that one of them already owns.  Then it makes sense for one of the agents to walk away with no “ticket” but a paycheck.  It means that he sold the object to the other guy.

OK, with all that background, here is their mechanism in its simplest form.  Agent 1 is provisionally allocated the ticket (so he becomes the seller in the bilateral negotiation.) Agent 2 is given the option to buy from agent 1 at a fixed price.  If his value is above that price he buys and pays the price to agent 1.  Otherwise agent 1 keeps the ticket and no money changes hands.  (David in his comment described a symmetric version of the mechanism which you can think of as representing a random choice of who will be provisionally allocated the ticket.  In our correspondence we figured out that the payment scheme for the symmetric version should be a little different; it’s an exercise to figure out how.  But I didn’t let him edit his comment. Ha Ha Ha!!!)

In the uniform case the price should be set at 50 cents and this gives a total surplus of 5/8, outperforming complete rationing. It’s instructive to understand how this is accomplished.  As I pointed out, an auction takes away consumer surplus from high-valuation types.  But in the Shao-Zhou framework there is an upside to this: the money extracted will be used to pay off the other agent, raising his consumer surplus.  So you want to use at least some auction elements in the mechanism.
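A quick simulation of the simple (asymmetric) version confirms the 5/8. The code is mine; values are uniform on $[0,1]$ as in the example, and since the payment is a transfer between the two agents it does not reduce total surplus:

```python
import random

random.seed(3)
N = 200_000
p = 0.5                     # the fixed resale price
shao_zhou = rationing = 0.0
for _ in range(N):
    v1, v2 = random.random(), random.random()
    # Agent 1 provisionally holds the ticket; agent 2 buys at p if he wants.
    shao_zhou += v2 if v2 > p else v1
    # Complete rationing: a randomly chosen agent (say agent 1) is served.
    rationing += v1
print(round(shao_zhou / N, 3), round(rationing / N, 3))  # about 0.625 vs 0.5
```

The ticket ends up with agent 2 exactly when his value exceeds 50 cents, which is what lifts expected surplus from 1/2 to 5/8.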

One common theme in my analysis and theirs is in fact a deep and under-appreciated result.  You never want to “burn money.”  Using an auction is worse than complete rationing because the screening benefits of pricing are outweighed by the surplus lost due to the payments to the seller.  Using the Shao-Zhou mechanism is optimal precisely because it finds a clever way to redirect those payments so no money is burned.  By the way, this is also an important theme in David Miller’s work on dynamic mechanisms. See here and here.

Finally, we can verify that the Shao-Zhou mechanism would no longer be optimal if we adapted it to satisfy the constraint that the loser doesn’t receive any money.  It’s easy to do this based on the revenue equivalence theorem.  In the Shao-Zhou mechanism an agent with zero value gets expected utility equal to 1/8 due to the payments he receives. We can subtract utility of 1/8 from all types and obtain an incentive-compatible mechanism with the same allocation rule.  This would be just enough to satisfy my constraint.  And then the total surplus will be 5/8 − 2/8 = 3/8, which is less than the 1/2 of the complete rationing mechanism.  That’s another expression of the losses associated with using even the very crude screening in the Shao-Zhou mechanism.

Next let me tell you about my correspondence with Kane Sweeney.  He constructed a simple example where an auction outperforms rationing.  It works like this.  Suppose that each bidder has either a very low willingness to pay, say 50 cents, or a very high willingness to pay, say $1,000.  If you ration then expected surplus is about $500. Instead you could do the following.  Run a second-price auction with the following modification to the rules: if both bid $1,000 then toss a coin and give the ticket to the winner at a price of $1.  This mechanism gives an expected surplus of about $750.  Basically this type of example shows that the monotone hazard rate assumption is important for the superiority of rationing.  To see this, suppose that we smooth out the distribution of values so that types between 50 cents and $1,000 have very small positive probability.  Then the hazard rate is first increasing around 50 cents and then decreasing from 50 cents all the way to $1,000.  So you want to pool all the types above 50 cents but you want to screen out the 50-cent types.  That’s what Kane’s mechanism is doing.
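Kane's example is also easy to simulate. One assumption is mine: the post doesn't specify the mixing probability, so the code takes the two values to be equally likely:

```python
import random

# Sketch of Kane Sweeney's example.  The 50-50 mixing probability is an
# assumption for illustration; the two values come from the example.
random.seed(4)
N = 200_000
LOW, HIGH = 0.5, 1000.0
ration_cs = modified_cs = 0.0
for _ in range(N):
    v1 = random.choice((LOW, HIGH))
    v2 = random.choice((LOW, HIGH))
    ration_cs += v1                   # rationing: random bidder served at cost
    if v1 == HIGH and v2 == HIGH:
        modified_cs += HIGH - 1.0     # both bid high: coin toss, price $1
    elif HIGH in (v1, v2):
        modified_cs += HIGH - LOW     # second price = the losing 50-cent bid
    else:
        modified_cs += LOW - LOW      # both low: winner pays 50 cents
print(round(ration_cs / N), round(modified_cs / N))  # about 500 vs 750
```

With probability 3/4 at least one bidder is a high type, and the modified auction always serves a high type in that event at a near-zero price, which is where the gain over rationing comes from.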

I would interpret Kane’s mechanism as delivering a slightly nuanced version of the rationing message.  You want to screen out the non-serious bidders but ration among all of the serious bidders.

A former academic economist and game theorist is now the Chief Economic Advisor in the Ministry of Finance in India.  His name is Kaushik Basu. Via MR, here is a policy paper he has just written advising that the giving of bribes should be de-criminalized.

The paper puts forward a small but novel idea of how we can cut down the incidence of bribery. There are different kinds of bribes and what this paper is concerned with are bribes that people often have to give to get what they are legally entitled to. I shall call these “harassment bribes.” Suppose an income tax refund is held back from a taxpayer till he pays some cash to the officer. Suppose government allots subsidized land to a person but when the person goes to get her paperwork done and receive documents for this land, she is asked to pay a hefty bribe. These are all illustrations of harassment bribes. Harassment bribery is widespread in India and it plays a large role in breeding inefficiency and has a corrosive effect on civil society. The central message of this paper is that we should declare the act of giving a bribe in all such cases as legitimate activity. In other words the giver of a harassment bribe should have full immunity from any punitive action by the state.

This is not just crazy talk; there is some logic behind it, fleshed out in the paper. If giving a bribe is forgiven but demanding a bribe remains a crime, then citizens forced to pay bribes for routine government services will have an incentive to report the bribe to the authorities.  This will discourage harassment bribery.

The obvious question is whether the bribe-enforcement authority will itself demand bribes.  To whom does a citizen report having given a bribe to the bribe authority? At some point there is a highest bribe authority and it can demand bribes with impunity.  With that power they can extract all of the reporter’s gains by demanding it as a bribe.

Worse still, they can demand an additional bribe from the original harasser in return for exonerating her. The effect is that the harasser sees only a fraction of the return on her bribe demands. This induces her to ask for even higher bribes.  Higher bribes mean fewer citizens are able to pay them and fewer citizens receive their due government services.

The bottom line is that in an economy run on bribes you want to make the bribes as efficient as possible.  That may mean encouraging them rather than discouraging them.

There are a few basic features that Grant Achatz and Nick Kokonas should build into their online ticket sales.  First, you want a good system to generate the initial allocation of tickets for a given date; second, you want an efficient system for re-allocating tickets as the date approaches.  Finally, you want to balance revenue maximization against the good vibe that comes from getting a ticket at a non-exorbitant price.

1. Just like with the usual reservation system, you would open up ticket sales for, say August 1, 3 months in advance on May 1.   It is important that the mechanism  be transparent, but at the same time understated so that the business of selling tickets doesn’t draw attention away from the main attractions: the restaurant and the bar.  The simple solution is to use a sealed bid N+1st price auction.  Anyone wishing to buy a ticket for August 1 submits a bid.  Only the restaurant sees the bid.  The top 100 bidders get tickets and they pay a price equal to the 101st highest bid.  Each bidder is informed whether he won or not and the final price. With this mechanism it is a dominant strategy to bid your true maximal willingness to pay so the auction is transparent, and all of the action takes place behind the scenes so the auction won’t be a spectacle distracting from the overall reputation of the restaurant.
2. Next probably wants to allow patrons to buy at lower prices than what an auction would yield.  That makes people feel better about the restaurant than if it was always trying to extract every last drop of consumers’ surplus. It’s easy to work that into the mechanism. Decide that 50 out of 100 seats will be sold to people at a fixed price and the remainder will be sold by auction. The 50 lucky people will be chosen randomly from all of those whose bid was at least the fixed price.  The division between fixed-price and auction quantities could easily be adjusted over time, for different days of the week, etc.
3. The most interesting design issue is to manage re-allocation of tickets. This is potentially a big deal for a restaurant like Next because many people will be coming from out of town to eat there. Last-minute changes of plans could mean that rapid re-allocation of tickets will have a big impact on efficiency. More generally, a resale market raises the value of a ticket because it turns the ticket into an option.  This increases the amount people are willing to bid for it.  So Next should design an online resale market that maximizes the efficiency of the allocation mechanism because those efficiency gains not only benefit the patrons but they also pay off in terms of initial ticket sales.
4. But again you want to minimize the spectacle.  You don’t want Craigslist. Here is a simple transparent system that is again discreet.  After the original allocation of tickets by auction, anyone who wishes to purchase a ticket for August 1 submits their bid to the system.  In addition, anyone currently holding a ticket for August 1 has the option of submitting a resale price to the system. These bids are all kept secret internally in the system. At any moment in which the second highest bid exceeds the second lowest resale price offered, a transaction occurs.  In that transaction the highest bidder buys the ticket and pays the second-highest bid.  The seller who offered the lowest price sells his ticket and receives the second lowest price.
5. That pricing rule has two effects.  First, it makes it a dominant strategy for buyers to submit bids equal to their true willingness to pay and for sellers to set their true reserve prices. Second, it ensures that Next earns a positive profit from every sale equal to the difference between the second-highest bid and the second-lowest resale price.  In fact it can be shown that this is the system that maximizes the efficiency of the market subject to the constraint the market is transparent (i.e. dominant strategies) and that Next does not lose money from the resale market.
6. The system can easily be fine-tuned to give Next an even larger cut of the transactions gains, but a basic lesson of this kind of market design is that Next should avoid any intervention of that sort.  Any profits earned through brokering resale only reduces the efficiency of the resale market.  If Next is taking a cut then a trade will only occur if the gains outweigh Next’s cut. Fewer trades means a less efficient resale market and that means that a ticket is a less flexible asset.  The final result is that whatever profits are being squeezed out of the resale market are offset by reduced revenues from the original ticket auction.
7. The one exception to the latter point is the people who managed to buy at the fixed price. If the goal was to give those people the gift of being able to eat at Next for an affordable price and not to give them the gift of being able to resell to high rollers, then you would offer them only the option to sell back their ticket at the original price (with Next either selling it again at the fixed price or at the auction price, pocketing the spread.)  This removes the incentive for “scalpers” to flood the ticket queue, something that is likely to be a big problem for the system currently being used.
8. A huge benefit of a system like this is that it makes maximal use of information about patrons’ willingness to pay and with minimal effort. Compare this to a system where Next tries to gauge buyer demand over time and set the market clearing price.  First of all, setting prices is guesswork.  An auction figures out the price for you. Second, when you set prices you learn very little about demand.  You learn only that so many people were willing to pay more than the price.  You never find out how much more than that price people would have been willing to pay.  A sealed bid auction immediately gives you data on everybody’s willingness to pay. And at every moment in time.  That’s very valuable information.
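The sealed-bid N+1st price rule in point 1 is straightforward to implement.  Here is a minimal sketch; the bidder names and numbers are purely illustrative, not part of any actual Next system:

```python
def allocate_tickets(bids, seats=100):
    """Sealed-bid (N+1)st-price auction: the top `seats` bidders win
    and every winner pays the highest losing bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winners = ranked[:seats]
    # The clearing price is the (seats+1)-th highest bid; it is zero
    # if demand does not exhaust the seats.
    price = bids[ranked[seats]] if len(ranked) > seats else 0.0
    return winners, price

# Four hypothetical bidders competing for two seats:
winners, price = allocate_tickets(
    {"ann": 300, "bob": 250, "cat": 120, "dan": 90}, seats=2)
print(winners, price)  # ['ann', 'bob'] 120
```

Because every winner pays the highest losing bid rather than his own, bidding one’s true willingness to pay is a dominant strategy, which is what makes the mechanism transparent.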

Suppose I want you to believe something and after hearing what I say you can, at some cost, check whether I am telling the truth.  When will you take my word for it and when will you investigate?

If you believe that I am someone who always tells the truth you will never spend the cost to verify.  But then I will always lie (whenever necessary.)  So you must assign some minimal probability to the event that I am a liar in order to have an incentive to investigate and keep me in check.

Now suppose I have different ways to frame my arguments.  I can use plain language or I can cloak them in the appearance of credibility by using sophisticated jargon.  If you lend credibility to jargon that sounds smart, then other things equal you have less incentive to spend the effort to verify what I say.  That means that jargon-laden statements must be even more likely to be lies in order to restore the balance.

(Hence, statistics come after “damned lies” in the hierarchy.)

Finally, suppose that I am talking to the masses.  Any one of you can privately verify my arguments.  But now you have a second, less perfect way of checking. If you look around and see that a lot of other people believe me, then my statements are more credible.  That’s because if other people are checking me and many of them demonstrate with their allegiance that they believe me, it reveals that my statements checked out with those that investigated.

Other things equal, this makes my statements more credible to you ex ante and lowers your incentives to do the investigating.  But that’s true of everyone so there will be a lot of free-riding and too little investigating.  Statements made to the masses must be even more likely to be lies to overcome that effect.

How do you get deadbeat dads to pay child support?  You threaten them with incarceration if they don’t pay.  But if the punishment has its intended effect you will find that the only deadbeats who actually receive the punishment are those for whom the punishment is pointless because they don’t have the money to pay.  They are the turnips.

“Deadbeats,” according to Sorensen, are parents who could pay but choose not to. “Turnips” — invoking the phrase, “You can’t get blood out of a turnip” — are parents who don’t have the money to pay. So what percentage of nonpaying parents are deadbeats and what percentage are turnips? Sorenson says most of those who end up in jail are low-income, and thus, “more likely to be a turnip than a deadbeat.”

Is that a bug or a feature?  That’s part of what the Supreme Court will decide in a case that was argued last week.

This is an easy one: North Korea thinks (1) the US is out to exploit and steal resources from other countries and hence (2)  Libya was foolish to give away its main weapon, its nascent nuclear arsenal, which acted as a deterrent to American ambition. Accordingly,

“The truth that one should have power to defend peace has been confirmed once again,” the [North Korean] spokesperson was quoted as saying, as he accused the U.S. of having removed nuclear arms capabilities from Libya through negotiations as a precursor to invasion.

“The Libyan crisis is teaching the international community a grave lesson,” the spokesperson was quoted as saying, heaping praise on North Korea’s songun, or military-first, policy.

In a perceptive analysis, Professor Ruediger Frank adds two more examples that inform North Korean doctrine.  Gorbachev’s attempts to modernize the Soviet Union led to its collapse and the emancipation of its satellite states.  Saddam’s agreement to allow a no-fly zone after Gulf War I led inexorably to Gulf War II and his demise.  The lesson: Get more nuclear arms and do not accede to any US demands.

Is there a solution that eliminates nuclear proliferation?  Such a solution would have to convince North Korea that their real and perceived enemies are no more likely to attack even if they know North Korea does not have a nuclear deterrent.  Most importantly, the US would have to eliminate North Korean fear of American aggression.  In a hypothetical future where the North Korean regime has given up its nuclear arsenal, suppose the poor, half-starved citizens of North Korea stage a strike and mini-revolt for food and shelter and the regime strikes back with violence.  Can it be guaranteed that South Korea does not get involved?  Can it be guaranteed that Samantha Power does not urge intervention to President Obama in his second term or Bill Kristol to President Romney in his first? No.  So, we are stuck with nuclear proliferation by North Korea.  The only question is whether North Korea can feel secure with a small arsenal.

Tomas Sjostrom and I offer one option for reducing proliferation in our JPE paper Strategic Ambiguity and Arms Proliferation.  If North Korea can keep the size and maturity of its nuclear arsenal hidden, we can but guess at its size and power.  It might be large or quite small – who knows.  This means even if the arsenal is actually small, North Korea can still pretend it is big and get some of the deterrent power of a large arsenal without actually having it.  The potential to bluff afforded by ambiguity of the size of weapons stockpiles affords strategic power to North Korea.  It reduces North Korea’s incentive to proliferate.  And this in turn can help the U.S. particularly if they do not really want to attack North Korea but fear nuclear proliferation.  Unlike poker and workplace posturing à la Dilbert, nuclear proliferation is not a zero-sum game.  Giving an opponent the room to bluff can actually create a feedback loop that helps other players.

Grading still hangs over me but teaching is done.  So, I finally had time to read Kiyotaki-Moore.  It’s been on my pile of papers to read for many, many years.  But it rose to the top because, first,  my PhD teaching allowed me to finally get to Myerson’s bargaining chapter in his textbook and Abreu-Gul’s bargaining with commitment model and, second, because Eric Maskin recommends it as one of his key papers for understanding the financial crisis.  So, some papers in my queue were cleared out and Kiyotaki-Moore leaped over several others.

I see why the paper has over 2000 cites on Google Scholar.

The main propagation mechanism in the model relies on the idea that credit-constrained borrowers borrow against collateral.  The greater the value of the collateral, “land”, the greater the amount they can borrow.  So, if for some reason next period’s price of land is high, the borrower can borrow more against his land this period.   Suppose there is an unexpected positive shock to the productivity of land.  This increases the value of land and hence its price.  This capital gain increases borrowing.  An increase in the value of land increases economic activity.  It also increases demand for land and hence the price of land.  This can choke off some demand for land.  The more elastic the supply of land, the smaller is the latter dampening effect.  So there can be a significant multiplier to a positive shock to technology.
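The feedback loop can be caricatured with a deliberately crude toy of my own (emphatically not the Kiyotaki-Moore model itself): suppose the land price equals a fundamental value plus a term proportional to collateralized borrowing, while borrowing is a fixed fraction of the collateral’s price.

```python
# Toy collateral multiplier: p = a + lam * b and b = theta * p, so the
# fixed point is p = a / (1 - lam * theta) -- a shock to the fundamental
# `a` is amplified by the factor 1 / (1 - lam * theta) > 1.
# Parameters are purely illustrative.
a, theta, lam = 1.0, 0.8, 0.5

p = a                      # start from the no-borrowing price
for _ in range(200):       # iterate the price/borrowing feedback
    b = theta * p          # borrowing against collateral priced at p
    p = a + lam * b        # borrowing feeds back into land demand

print(round(p, 6))         # converges to a / (1 - 0.4) = 5/3
```

The point of the toy is only that the price/collateral feedback converges to a level strictly above the fundamental, so a one-unit productivity shock moves the price by more than one unit.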

(Why are borrowers constrained in their borrowing by the value of their land rather than the NPV of their projects? Kiyotaki-Moore rely on a model of debt of Hart and Moore to justify this constraint.  While Hart-Moore is also in my pile, I still have not had time to read it.  I did note they have an extremely long Appendix to justify the connection between collateral and borrowing!  The main idea in Hart-Moore is that an entrepreneur can always walk away from a project and hold it up.  As his human capital is vital for the project’s success, he will be wooed back in renegotiation.  The Appendix must argue that he captures all the surplus above the liquidation value of the land.  Hence, the lender will only be willing to lend up to the value of the collateral to avoid hold up.)

But how do we get credit cycles?   As the price of land rises, the entrepreneurs acquire more land.  This increases the price of land.  They also accumulate debt.   The debt constrains their ability to borrow and eventually demand for land declines and its price falls.  A cycle.  Notice that this cycle is not generated by shocks to technology or preferences but arises endogenously as land and debt holdings vary over time!  I gotta think about this part more….

My daughter’s 4th grade class read The Emperor’s New Clothes (a two minute read) and today I led a discussion of the story.  Here are my notes.

The Emperor, who was always to be found in his dressing room, commissioned some new clothes from weavers who claimed to have a magical cloth whose fine colors and patterns would be “invisible to anyone who was unfit for his office, or who was unusually stupid.”

Fast forward to the end of the story. Many of the Emperor’s most trusted advisors have, one by one, inspected the clothes and faced the same dilemma. Each of them could see nothing and yet for fear of being branded stupid or unfit for office each bestowed upon the weavers the most elaborate compliments they could muster.  Finally the Emperor himself is presented with his new clothes and he is shocked to discover that they are invisible only to him.

Am I a fool? Am I unfit to be the Emperor? What a thing to happen to me of all people! – Oh! It’s very pretty,” he said. “It has my highest approval.” And he nodded approbation at the empty loom. Nothing could make him say that he couldn’t see anything.

The weavers have successfully engineered a herd. For any inspector who doubts the clothes’ authenticity, to be honest and dispel the myth requires him to convince the Emperor that the clothes are invisible to everybody.  That is risky because if the Emperor believes the clothes are authentic (either because he sees them or he thinks he is the only one who does not) then the inspector would be judged unfit for office.  With each successive inspector who declares the clothes to be authentic the evidence mounts, making the risk to the next inspector even greater.  After a long enough sequence no inspector will dare to deviate from the herd, including the Emperor himself.

The clothes and the herd are a metaphor for authority itself.  Respect for authority is sustained only because others’ respect for authority is thought to be sufficiently strong to support the ouster of any who would question it.

But whose authority?  The deeper lesson of the story is a theory of the firm based on the separation of ownership and management.  Notice that it is the weavers who capture the rents from the environment of mutual fear that they have created.  They show that the optimal use of their asset is to clothe a figurehead in artificial authority and hold him in check by keeping even him in doubt of his own legitimacy.  The herd bestows management authority on the figurehead but ensures that rents flow to the owners who are surreptitiously the true authorities.

The swindlers at once asked for more money, more silk and gold thread, to get on with the weaving. But it all went into their pockets. Not a thread went into the looms, though they worked at their weaving as hard as ever.

The story concludes with a cautionary note.  The herd holds together only because of calculated, self-interested subjects.  The organizational structure is vulnerable if new members are not trained to see the wisdom of following along.

“But he hasn’t got anything on,” a little child said.

“Did you ever hear such innocent prattle?” said its father. And one person whispered to another what the child had said, “He hasn’t anything on. A child says he hasn’t anything on.”

“But he hasn’t got anything on!” the whole town cried out at last.

Herds are fragile because knowledge is contagious.  As the organization matured, everyone has secretly come to know that the authority is fabricated. And later everyone comes to know that everyone has secretly come to know that.  This latent higher-order knowledge requires only a seed of public knowledge before it crystallizes into common knowledge that the organization is just a mirage.

And after that, who is the last member to maintain faith in the organization?

The Emperor shivered, for he suspected they were right. But he thought, “This procession has got to go on.” So he walked more proudly than ever, as his noblemen held high the train that wasn’t there at all.

I am always surprised in Spring how suddenly there are cars parked on the residential streets in my town where just a month ago the streets were empty. These are narrow streets so a row of cars turns it into a one-lane street that is supposed to handle two-way traffic.  And that is when we have to solve the problem of who enters the narrowed section first when two cars are coming in opposite directions on the street.

On my street cars are only allowed to park on the North side.  So if I am headed West I have to move to the oncoming traffic side to pass the row of parked cars.  If I do that and the car coming in the opposite direction has to stop for just a second or two, the driver will be understanding (a quick royal wave on the way by helps!) But if she has to wait much longer than that she is not going to be happy.  And indeed the convention on my street would have me stop and wait even if I arrive at the bottleneck first.

But of course, from an efficiency point of view it shouldn’t matter which side the cars are parked on.  Total waiting time is minimized by a first-come first-served convention.  And note that there aren’t even distributional consequences because what goes West must go East eventually.

Still the payoff-irrelevant asymmetry seems to matter.  For example, a driver headed West would never complain if he arrives second and is made to wait.  And because of the strict efficiency gains this is not the same as New York on the right, London on the left. The perceived property right makes all the difference. And even I, who understand the efficiency argument, adhere to the convention.

Of course there is the matter of the gap. If the Westbound driver arrives just moments before the Eastbound driver then in fact he is forced to stop because at the other end he will be bottled in.  There won’t be enough room for the Westbound driver to get through if the Eastbound driver has not stopped with enough of a gap.

And once you notice this you see that in fact the efficient convention is very difficult to maintain, especially when it’s a long row of cars.  The efficient convention requires the Westbound driver to be able to judge the speed of the oncoming car as well as the current gap.   And the reaction time of the Eastbound driver is an unobservable variable that will have to be factored in.

That ambiguity means that there is no scope for agreement on just how much of a headstart the Eastbound driver should be afforded.  Especially because if he is forced to back up, he will be annoyed with good reason. So for sure the second best will give some baseline headstart to the Eastbound driver.

Then there’s the moral hazard problem.  You can close the gap faster by speeding up a bit on the approach.  And even if you don’t speed up, any misjudgement of the gap raises the suspicion that you did speed up, bolstering the Eastbound driver’s gripe.  Note that the moral hazard problem is not mitigated by a convention which gives a longer headstart to the Eastbound driver.  No matter what the headstart is, in those cases where the headstart is binding the incentive to speed up is there.

All things considered, the property rights convention, while inefficient from a first-best point of view, may in fact be the efficient one when the informational asymmetry and moral hazard problems are taken into account.

An insightful analysis from John Quiggin at Crooked Timber of the organizational economics of Arab dictatorships.

The element of truth is that the Arab monarchies have good prospects of survival if they can manage the transition to constitutional monarchy. And it makes sense for them to do so. After all, a constitutional monarch gets to live, literally, like a king, without having to worry about boring stuff like budgets and foreign affairs. And, in the modern context, the risk that such a setup will be overthrown by a military coup, as happened to quite a few of the postcolonial constitutional monarchs, is much diminished. By contrast, there’s no such thing as a constitutional dictatorship or tyranny and no way to make the transition from President-for-Life to constitutional monarch. That’s not to say all the monarchs in the region will survive, or for that matter, that all the remaining dictatorships will fall. But the general point is valid enough.

With this corollary for Saudi Arabia

The other big problem is that this can’t easily be done in Saudi Arabia. There are not even the forms of a constitutional government to begin with. Worse, the state is not so much a monarchy as an aristocracy/oligarchy saddled with 7000 members of the House of Saud, and many more of the hangers-on that typify such states. These people have a lot to lose, and nothing to gain, from any move in the direction of democracy.

An important role of government is to provide public goods that cannot be provided via private markets. There are many ways to express this view theoretically; a famous one using modern theory is Mailath-Postlewaite.  (Here is a simple exposition.) They consider a public good that potentially benefits many individuals and can be provided at a fixed per-capita cost C.  (So this is a public good whose cost scales proportionally with the size of the population.)

Whatever institution is supposed to supply this public good faces the problem of determining whether the sum of all individuals’ values exceeds the cost.  But how do you find out individuals’ values?  Without government intervention the best you can do is ask them to put their money where their mouths are.  But this turns out to be hopelessly inefficient.  For example if everybody is expected to pay (at least) an equal share of the cost, then the good will be produced only if every single individual has a willingness to pay of at least C.  The probability that happens shrinks to zero exponentially fast as the population grows.  And in fact you can’t do much better than have everyone pay an equal share.
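The exponential collapse is easy to see numerically.  If each individual’s willingness to pay exceeds the per-capita cost C with probability p < 1 (say p = 1/2, as when values are uniform on [0, 2C], an assumption of mine for illustration), equal cost-sharing provides the good only when every single person clears the bar:

```python
p = 0.5  # illustrative: P(willingness to pay >= per-capita cost C)

for n in (2, 10, 50):
    # The good is produced only if all n individuals value it above C,
    # which happens with probability p ** n.
    print(n, p ** n)
```

Already at n = 50 the provision probability is on the order of 10^-16: the good is essentially never produced, no matter how valuable it is on average.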

Government can help because it has the power to tax.  We don’t have to rely on voluntary contributions to raise enough to cover the costs of the good. (In the language of mechanism design, the government can violate individual rationality.) But compulsory contributions don’t amount to a free lunch:  if you are forced to pay you have no incentive to truthfully express your true value for the public good.  So government provision of public goods helps with one problem but exacerbates another.  For example if the policy is to tax everyone then nobody gives reliable information about their value and the best government can do is to compare the cost with the expected total value.  This policy is better than nothing but it will often be inefficient since the actual values may be very different.

But government can use hybrid schemes too.  For example, we could pick a representative group in the population and have them make voluntary contributions to the public good, signaling their value.  Then, if enough of them have signaled a high willingness to pay, we produce the good and tax everyone else an equal share of the residual cost.  This way we get some information revelation but not so much that the Mailath-Postlewaite conclusion kicks in.

Indeed it is possible to get very close to the ideal mechanism with an extreme version of this.  You set aside a single individual and then ask everyone else to announce their value for the public good.  If the total of these values exceeds the cost you produce the public good and then charge them their Vickrey-Clarke-Groves (VCG) tax.  It is well known that these taxes provide incentives for truthful revelation but that the sum of these taxes will fall short of the cost of providing the public good. Here’s where government steps in.  The singled-out agent will be forced to cover the budget shortfall.
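A sketch of that scheme (my own illustration of the standard Clarke pivotal mechanism, with the shortfall dumped on the set-aside agent as in the text):

```python
def pivotal_mechanism(values, cost):
    """Clarke pivotal mechanism for a binary public good of total cost
    `cost`.  Returns (produce?, taxes on the reporting agents, budget
    shortfall that the set-aside agent must cover)."""
    produce = sum(values) >= cost
    taxes = []
    for v in values:
        others = sum(values) - v
        # An agent pays only when he is pivotal: without him the
        # others alone would not cover the cost.
        taxes.append(max(cost - others, 0.0) if produce else 0.0)
    shortfall = cost - sum(taxes) if produce else 0.0
    return produce, taxes, shortfall

produce, taxes, shortfall = pivotal_mechanism([6.0, 5.0, 2.0], cost=10.0)
print(produce, taxes, shortfall)  # True [3.0, 2.0, 0.0] 5.0
```

Truthful reporting is a dominant strategy because an agent’s tax depends on his report only through the produce/don’t-produce decision, yet the pivotal taxes here sum to 5 against a cost of 10: exactly the shortfall the singled-out agent is forced to absorb.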

Now obviously this is bad policy and is probably infeasible anyway since the poor guy may not be able to pay that much.  But the basic idea can be used in a perfectly acceptable way.  The idea was that by taxing an agent we lose the ability to make use of information about his value so we want to minimize the efficiency loss associated with that.  Ideally we would like to find an individual or group of individuals who are completely indifferent about the public good and tax them.  Since they are indifferent we don’t need their information so we lose nothing by loading all of the tax burden on them.

In fact there is always such a group and it is a very large group:  everybody who is not yet born.  Since they have no information about the value of a public good provided today they are the ideal budget balancers.  Today’s generation uses the efficient VCG mechanism to decide whether to produce the good and future generations are taxed to make up any budget imbalance.

There are obviously other considerations that come into play here and this is an extreme example contrived to make a point.  But let me be explicit about the point.  Balanced budget requirements force today’s generation to internalize all of the costs of their decisions.  It is ingrained in our senses that this is the efficient way to structure incentives.  For if we don’t internalize the externalities imposed on subsequent generations we will make inefficient decisions.  While that is certainly true on many dimensions, it is not a universal truth.  In particular public goods cannot be provided efficiently unless we offload some of the costs to the next generation.