You are currently browsing the tag archive for the ‘incentives’ tag.
He is expecting regular raises. Not every month, maybe not even every year but he expects a raise and he has his own timetable for when you should give it to him. No matter how hard you try to keep to a fair schedule of raises, uncertainty about his expectations together with other random factors mean that at some point you are going to fall behind.
As time passes with no raise, he is going to start slacking off. Maybe just a little bit at first but it's going to be noticeable. Now from your perspective it just looks like he is not working as hard as when you first hired him. You tell yourself stories about how gardeners start out by working hard to get your business and then slack off over time. You might even consider that maybe he is slacking off because you aren't giving him a raise but what are you going to do now? You can't possibly give him a raise and reward him for slacking off. If anything your raise is going to come even later now.
And so he slacks off even more. In fact he has been through this before so the very first slack-off was a big drop because he knew it was the beginning of the end. He’s gonna be fired pretty soon.
There is typically a fine for parking your car on the street facing the wrong direction, i.e. against traffic. What is the harm in that?
Economic theory suggests that penalties should be attached to behaviors that are correlated with crime and not necessarily to criminal behavior itself. For example, price fixing may be impossible to detect, but conspiracy to fix prices may be much easier. It makes sense to make cheap talk a crime even though the talk itself causes no harm.
When your car is parked facing the wrong way it's a sure sign that A) you previously committed the crime of driving the wrong way and B) you will soon do it again.
Clearly the reason that sex is so pleasurable is because that motivates us to have a lot of it. It is evolutionarily advantageous to desire the things that make us more fit. Sex feels good, we seek that feeling, we have a lot of sex, we reproduce more.
But that is not the only way to get motivated. It is also advantageous to derive pleasure directly from having children. We see children, we sense the joy we would derive from our own children and we are motivated to do what’s necessary to produce them, even if we had no particular desire for the intermediate act of sex.
And certainly both sources of motivation operate on us, but in different proportions. So it is interesting to ask what determines the optimal mix of these incentives. One alternative is to reward an intermediate act which has no direct effect on fitness but can, subject to idiosyncratic conditions together with randomness, produce a successful outcome which directly increases fitness. Sex is such an act. The other alternative is to confer rewards upon a successful outcome (or penalties for a failure.) That would mean programming us with a desire and love for children.
The tradeoff can be understood using standard intuitions from incentive theory. The rewards are designed to motivate us to take the right action at the right time. The drawback of rewarding only the final outcome is that it may be too noisy a signal of whether the agent acted. For example, not every encounter results in offspring. If so, then a more efficient use of rewards to motivate an act of sex is to make sex directly pleasurable. But the drawback of rewarding sex directly is that whether it is desirable to have sex right now depends on how likely it is to produce valuable offspring. If we are made to care only about the (value of) offspring we are more likely to make the right decision under the right circumstances.
Now these balance out differently for males than for females. Because when the female becomes pregnant and gives birth that is a very strong signal that she had sex at an opportune time but conveys noisier information about him. That is because, of course, this child could belong to any one of her (potentially numerous) mates. Instilling a love for children is therefore a relatively more effective incentive instrument for her than for him.
As for love of sex, note that the evolutionary value of offspring is different for males than for females because females have a significant opportunity cost given that they get pregnant with one mate at a time. This means that the circumstances are nearly always right for males to have sex, but much more rarely so for females. It is therefore efficient for males to derive greater pleasure from sex.
(It is a testament to my steadfastness as a theorist that I stand firmly by the logic of this argument despite the fact that, at least in my personal experience, females derive immense pleasure from sex.)
That was the title of a very interesting talk at the Biology and Economics conference I attended over the weekend at USC. The authors are Juan Carrillo, Isabelle Brocas and Ricardo Alonso. It's basically a model of how multitasking is accomplished when different modules in the brain are responsible for specialized tasks and those modules require scarce resources like oxygen in order to do their job. (I cannot find a copy of the paper online.)
The brain is modeled as a kludgy organization. Imagine that the listening-to-your-wife division and the watching-the-French-Open division of YourBrainINC operate independently of one another and care about nothing but completing their individual tasks. What happens when both tasks are presented at the same time? In the model there is a central administrator in charge of deciding how to ration energy between the two divisions. What makes this non-trivial is that only the individual divisions know how much juice they are going to need based on the level of difficulty of this particular instance of the task.
Here’s the key perspective of the model. It is assumed that the divisions are greedy: they want all the resources they need to accomplish their task and only the central administrator internalizes the tradeoffs across the two tasks. This friction imposes limits on efficient resource allocation. And these limits can be understood via a mechanism design problem which is novel in that there are no monetary transfers available. (If only the brain had currency.)
The optimal scheme has a quota structure which has some rigidity. There is a cap on the amount of resources a given division can utilize and that cap is determined solely by the needs of the other division. (This is a familiar theme from economic incentive mechanisms.) An implication is that there is too little flexibility in re-allocating resources to difficult tasks. Holding fixed the difficulty of task A, as the difficulty of task B increases, eventually the cap binds. The easy task is still accomplished perfectly but errors start to creep in on the difficult task.
Apparently it’s biology and economics week for me because after Andrew Caplin finishes his fantastic series of lectures here at NU tomorrow, I am off to LA for this conference at USC on Biology, Neuroscience, and Economic Modeling.
Today Andrew was talking about the empirical foundations of dopamine as a reward system. Along the way he reminded us of an important finding about how dopamine actually works in the brain. It’s not what you would have guessed. If you take a monkey and do a Pavlovian experiment where you ring a bell and then later give him some goodies, the dopamine neurons fire not when the actual payoff comes, but instead when the bell rings. Interestingly, when you ring the bell and then don’t come through with the goods there is a dip in dopamine activity that seems to be associated with the letdown.
The theory is that dopamine responds to changes in expectations about payoffs, and not directly to the realization of those payoffs. This raises a very interesting theoretical question: why would that be Nature’s most convenient way to incentivize us? Think of Nature as the principal, you are the agent. You have decision-making authority because you know what choices are available and Nature gives you dopamine bonuses to guide you to good decisions. Can you come up with the right set of constraints on this moral hazard problem under which the optimal contract uses immediate rewards for the expectation of a good outcome rather than rewards that come later when the outcome actually obtains?
Here’s my lame first try, based on discount factors. Depending on your idiosyncratic circumstances your survival probability fluctuates, and this changes how much you discount the expectation of future rewards. Evolution can’t react to these changes. But if Nature is going to use future rewards to motivate your behavior today she is going to have to calibrate the magnitude of those incentive payments to your discount factor. The fluctuations in your discount factor make this prone to error. Immediate payments are better because they don’t require Nature to make any guesses about discounting.
Andrew Caplin is visiting Northwestern this week to give a series of lectures on psychology and economics. Today he talked about some of his early work and briefly mentioned an intriguing paper that he wrote with Kfir Eliaz.
Too few people get themselves tested for HIV infection. Probably this is because the anxiety that would accompany the bad news overwhelms the incentive to get tested in the hopes of getting the good news (and also the benefit of acting on whatever news comes out.) For many people, if they have HIV they would much rather not know it.
How do you encourage testing when fear is the barrier? Caplin and Eliaz offer one surprisingly simple, yet surely controversial possibility: make the tests less informative. But not just any old way. Because we want to maintain the carrot of a positive result but minimize the deterrent of a negative result. Now we could try outright deception by certifying everyone who tests negative but give no information to those who test positive. But that won’t fool people for long. Anyone who is not certified will know he is positive and we are back to the anxiety deterrent.
But even when we are bound by the constraint that subjects will not be fooled there is a lot of freedom to manipulate the informativeness of the test. Here's how to ramp down the deterrent effect of a bad result without losing much of the incentive effects of a good result. A patient who is tested will receive one of two outcomes: a certification that he is negative or an inconclusive result. The key idea is that when the patient is negative the test will be designed to produce an inconclusive result with positive probability p. (This could be achieved by actually degrading the quality of the test or just withholding the result with positive probability.)
Now a patient who receives an inconclusive result won’t be fooled. He will become more pessimistic, that is inevitable. But only slightly more pessimistic. The larger we choose p (the key policy instrument) the less scary is an inconclusive result. And no matter what p is, a certification that the patient is HIV-negative is a 100% certification. There is a tradeoff that arises, of course, and that is that high p means that we get the good news less often. But it should be clear that some p, often strictly between 0 and 1, would be optimal in the sense of maximizing testing and minimizing infection.
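Here is a quick Python sketch (my own illustration, with made-up numbers) of just how pessimistic an inconclusive result leaves a patient. A negative patient gets an inconclusive result with probability p; a positive patient always does:

```python
def posterior_after_inconclusive(prior, p):
    """P(HIV-positive | inconclusive result), by Bayes' rule.

    prior: the patient's prior probability of being positive.
    p:     probability a *negative* patient still gets an inconclusive
           result (positive patients always get one).
    """
    return prior / (prior + (1 - prior) * p)

# With a 5% prior: at p = 0.9 an inconclusive result barely moves the
# needle, while at p = 0.1 it is genuinely scary.
print(posterior_after_inconclusive(0.05, 0.9))  # only slightly above 0.05
print(posterior_after_inconclusive(0.05, 0.1))  # roughly 0.34
```

The larger p is, the closer the posterior stays to the prior, which is exactly the policy lever in the Caplin-Eliaz scheme.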
In the New Yorker, Lawrence Wright discusses a meeting with Hamid Gul, the former head of the Pakistani secret service I.S.I. In his time as head, Gul channeled the bulk of American aid in a particular direction:
I asked Gul why, during the Afghan jihad, he had favored Gulbuddin Hekmatyar, one of the seven warlords who had been designated to receive American assistance in the fight against the Soviets. Hekmatyar was the most brutal member of the group, but, crucially, he was a Pashtun, like Gul.
Gul offered a more principled rationale for his choice: “I went to each of the seven, you see, and I asked them, ‘I know you are the strongest, but who is No. 2?’ ” He formed a tight, smug smile. “They all said Hekmatyar.”
Gul’s mechanism is something like the following: Each player is allowed to cast a vote for everyone but himself. The warlord who gets the most votes gets a disproportionate amount of U.S. aid.
By not allowing a warlord to vote for himself, Gul eliminates the warlord’s obvious incentive to push his own candidacy to extract U.S. aid. Such a mechanism would yield no information. With this strategy unavailable, each player must decide how to cast a vote for the others. Voting mechanisms have multiple equilibria but let us look at a “natural” one where a player conditions on the event that his vote is decisive (i.e. his vote can send the collective decision one way or the other). In this scenario, each player must decide how the allocation of U.S. aid to the player he votes for feeds back to him. Therefore, he will vote for the player who will use the money to take an action that most helps him, the voter. If fighting Soviets is such an action, he will vote for the strongest player. If instead he is worried that the money will be used to buy weapons and soldiers to attack other warlords, he will vote for the weakest warlord.
So, Gul’s mechanism does aggregate information in some circumstances even if, as Wright intimates, Gul is simply supporting a fellow Pashtun.
Here is a problem that has been in the back of my mind for a long time. What is the second-best dominant-strategy incentive compatible (DSIC) mechanism in a market setting?
For some background, start with the bilateral trade problem of Myerson-Satterthwaite. We know that among all DSIC, budget-balanced mechanisms the most efficient is a fixed-price mechanism. That is, a price is fixed ex ante and the buyer and seller simply announce whether they are willing to trade at that price. Trade occurs if and only if both are willing and if so the buyer pays the fixed price to the seller. This is Hagerty and Rogerson.
Now suppose there are two buyers and two sellers. How would a fixed-price mechanism work? We fix a price p. Buyers announce their values and sellers announce their costs. We first see if there are any trades that can be made at the fixed price p. If both buyers have values above p and both sellers have costs below p then both units trade at price p. If two buyers have values above p and only one seller has a cost below p then one unit will be sold: the buyers will compete in a second-price auction and the seller will receive p (there will be a budget surplus here.) Similarly if the sellers are on the long side they will compete to sell with the buyer paying p and again a surplus.
A fixed-price mechanism is no longer optimal. The reason is that we can now use competition among buyers and sellers and "price discovery." A simple mechanism (but not the optimal one) is a double auction. The buyers play a second-price auction between themselves, the sellers play a second-price reverse auction between themselves. The winners of the two auctions have won the right to trade. They will trade if and only if the second-highest buyer value (which is what the winning buyer will pay) exceeds the second-lowest seller value (which is what the winning seller will receive.) This ensures that there will be no deficit. There might be a surplus, which would have to be burned.
This mechanism is DSIC and never runs a deficit. It is not optimal however because it only sells one unit. But it has the virtue of allowing the "price" to adjust based on "supply and demand." Still, there is no welfare ranking between this mechanism and a fixed-price mechanism because a fixed price mechanism will sometimes trade two units (if the price was chosen fortuitously) and sometimes trade no units (if the price turned out too high or low) even though the price discovery mechanism would have traded one.
But here is a mechanism that dominates both. It’s a hybrid of the two. We fix a price p and we interleave the rules of the fixed-price mechanism and the double auction in the following order
- First check if we can clear two trades at price p. If so, do it and we are done.
- If not, then check if we can sell one unit by the double auction rules. If so, do it and we are done.
- Finally, if no trades were executed in the previous two steps then return to the fixed-price rules and see if we can execute a single trade.
I believe this mechanism is DSIC (exercise for the reader, the order of execution is crucial!). It never runs a deficit and it generates more trade than either standalone mechanism: fixed-price or double auction.
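Here is the hybrid spelled out in code for the 2x2 case, in exactly the order above (a sketch of my own reading of the rules; it returns only the number of trades, not the payments):

```python
def hybrid_mechanism(values, costs, p):
    """Hybrid fixed-price / double-auction mechanism for two buyers and
    two sellers; returns the number of units traded."""
    v = sorted(values, reverse=True)  # v[0] = highest buyer value
    c = sorted(costs)                 # c[0] = lowest seller cost
    # Step 1: both buyers above p and both sellers below p => two trades at p.
    if v[1] >= p and c[1] <= p:
        return 2
    # Step 2: double auction -- winning buyer pays v[1], winning seller
    # receives c[1]; trade iff v[1] >= c[1], so there is never a deficit.
    if v[1] >= c[1]:
        return 1
    # Step 3: fall back to a single trade at the fixed price.
    if v[0] >= p and c[0] <= p:
        return 1
    return 0

# The hybrid trades whenever either standalone mechanism would:
print(hybrid_mechanism([10, 9], [1, 2], 20))  # 1: double auction rescues a bad p
print(hybrid_mechanism([10, 3], [1, 8], 9))   # 1: fixed price rescues the auction
```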
Very interesting research question: is this a second-best mechanism? If not, what is? If so, how do you generalize it to markets with an arbitrary number of buyers and sellers?
A buyer and a seller negotiate a sale price. The buyer has some privately known value and the seller has some privately known cost, and with positive probability there are gains from trade but with positive probability the seller's cost exceeds the buyer's value. (So this is the Myerson-Satterthwaite setup.)
Do three treatments.
- The experimenter fixes a price in advance and the buyer and seller can only accept or reject that price. Trade occurs if and only if they both accept.
- The seller makes a take it or leave it offer.
- The parties can freely negotiate and they trade if and only if they agree on a price.
Theoretically there is no clear ranking of these three mechanisms in terms of their efficiency (the total gains from trade realized.) In practice the first mechanism clearly sacrifices some efficiency in return for simplicity and transparency. If the price is set right the first mechanism would outperform the second in terms of efficiency due to a basic market power effect. In principle the third treatment could allow the parties to find the most efficient mechanism, but it would also allow them to negotiate their way to something highly inefficient.
A conjecture would be that with a well-chosen price the first mechanism would be the most efficient in practice. That would be an interesting finding.
A variation would be to do something similar but in a public goods setting. We would again compare simple but rigid mechanisms with mechanisms that allow for more strategic behavior. For example, a version of mechanism #1 would be one in which each individual was asked to contribute an equal share of the cost and the project succeeds if and only if all agree to their contributions. Mechanism #3 would allow arbitrary negotiation with the only requirement being that the total contribution exceeds the cost of the project.
In the public goods setting I would conjecture that the opposite force is at work. The scope for additional strategizing (seeding, cajoling, guilt-tripping, etc) would improve efficiency.
Anybody know if anything like these experiments has been done?
Kobe Bryant was recently fined $100,000 for making a homophobic comment to a referee. Ryan O’Hanlon writing for The Good Men Project blog puts it into perspective:
- It’s half as bad as conducting improper pre-draft workouts.
- It’s twice as bad as saying you want to leave the NBA and go home.
- It’s just as bad as talking about the collective bargaining agreement.
- It’s twice as bad as saying one of your players used to smoke too much weed.
- It’s just as bad as writing a letter in Comic Sans about a former player.
- It’s just as bad as saying you want to sign the best player in the NBA.
- It’s four times as bad as throwing a towel to distract a guy when he’s shooting free throws.
- It’s four times as bad as kicking a water bottle.
- It’s 10 times as bad as standing in front of your bench for an extended period of time.
- It’s 10 times as bad as pretending to be shot by a guy who once brought a gun into a locker room.
- It’s 13.33 times as bad as tweeting during a game.
- It’s five times as bad as throwing a ball into the stands.
- It’s four times as bad as throwing a towel into the stands.
- It’s twice as bad as lying about smelling like weed and having women in a hotel room during the rookie orientation program.
- It’s one-fifth as bad as snowboarding.
That’s based on a comparison of the fines that the various misdeeds earned. The “n times as bad” is the natural interpretation of the fines since we are used to thinking of penalties as being chosen to fit the crime. But NBA justice needn’t conform to our usual intuitions because this is an employer/employee relationship governed by actual contract, not just social contract. We could try to think of these fines as part of the solution to a moral hazard problem. Independent of how “bad” the behaviors are, there are some that the NBA wants to discourage and fines are chosen in order to get the incentives right.
But that’s a problematic interpretation too. From the moral hazard perspective the optimal fine for many of these would be infinite. Any finite fine is essentially a license to behave badly as long as the player has a strong enough desire to do so. Strong enough to outweigh the cost of the fine. You can’t throw a towel to distract a guy when he’s shooting free throws unless its so important to you that you are willing to pay $250,000 for the privilege.
You can rescue moral hazard as an explanation in some cases because if there is imperfect monitoring then the optimal fine will have to be finite. Because with imperfect monitoring the fine cannot be a perfect deterrent. For example it may not be possible to detect with certainty that you were lying about smelling like weed and having women in a hotel room during the rookie orientation program. If so then the false positives will have to be penalized. And when the fine will be paid with positive probability even with players on their best behavior you are now trading off incentives vs. risk exposure.
But the imperfect monitoring story can’t explain why Comic Sans doesn’t get an infinite fine, purifying the game of that transgression once and for all. Or tweeting, or snowboarding or most of the others as well.
It could be that the NBA knows that egregious fines can be contested in court or trigger some other labor dispute. This would effectively put a cap on fines at just the level where it is not worth the player's time and effort to dispute it. But that doesn't explain why the fines are not all pegged at that cap. It could be that the likelihood that a fine of a given magnitude survives such a challenge depends on the public perception of the crime. That could explain some of the differences but not many. Why is the fine for saying you want to leave the NBA larger than the fine for throwing a ball into the stands?
Once we’ve dispensed with those theories it just might be that the NBA recognizes that players simply want to behave badly sometimes. Without that outlet something else is going to give. Poor performance perhaps or just an eventual Dennis Rodman. The NBA understands that a fine is a price. And with the players having so many ways of acting out to choose from, the NBA can use relative prices to steer them to the efficient frontier. Instead of kicking a water bottle, why not get your frustrations out by sending 3 1/2 tweets during the game? Instead of saying that one of your players smokes too much weed, go ahead and indulge your urge to stand out in front of the bench for an extended period of time. You can do it for 5 times as long as the last guy or even stand 5 times farther out.
Not surprisingly, all of these choices start to look like real bargains compared to snowboarding and improper pre-draft workouts.
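For reference, the dollar amounts the list implies can be backed out from Kobe's $100,000 baseline. A short script (the amounts below are computed from the stated ratios, not quoted from the NBA, and the 13.33 case gets rounded):

```python
KOBE_FINE = 100_000  # the baseline every ratio in the list is measured against

# "Kobe's fine is r times as bad as X" implies X's fine is KOBE_FINE / r.
# A few entries from the list above:
times_as_bad = {
    "improper pre-draft workouts": 0.5,
    "saying you want to leave the NBA": 2,
    "towel toss during free throws": 4,
    "standing in front of your bench": 10,
    "tweeting during a game": 13.33,
    "throwing a ball into the stands": 5,
    "snowboarding": 0.2,
}

implied_fines = {deed: round(KOBE_FINE / r) for deed, r in times_as_bad.items()}
print(implied_fines["snowboarding"])  # 500000
```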
The opening gambit of the book is surprisingly simple: If you were sentenced to five years in prison but had the option of receiving lashes instead, what would you choose? You would probably pick flogging. Wouldn’t we all?
I propose we give convicts the choice of the lash at the rate of two lashes per year of incarceration. One cannot reasonably argue that merely offering this choice is somehow cruel, especially when the status quo of incarceration remains an option. Prison means losing a part of your life and everything you care for. Compared with this, flogging is just a few very painful strokes on the backside. And it’s over in a few minutes. Often, and often very quickly, those who said flogging is too cruel to even consider suddenly say that flogging isn’t cruel enough. Personally, I believe that literally ripping skin from the human body is cruel. Even Singapore limits the lash to 24 strokes out of concern for the criminal’s survival. Now, flogging may be too harsh, or it may be too soft, but it really can’t be both.
The article is an excellent example of how considering an alternative (flogging replacing prison), even a non-serious one, makes you think about the status quo in a new way.
If we could calibrate the number of lashes so as to create an equal disincentive but at a tiny fraction of the cost that should be a Pareto improvement right? Somehow that doesn’t seem right. I think the thought experiment reveals that one important part of incarceration is just to prevent the criminal from committing more crimes.
If N lashes is just as unpleasant as 1 year in prison what exactly does that mean? It says that N lashes plus whatever I decide to do during the next year is just as unpleasant as being shut in for a year. It will quite often be that the pivotal comparison is between prison and N lashes plus another year worth of crime. In that case we certainly don’t have a Pareto improvement.
(hoodhi: The Browser.)
This is the third and final post on ticket pricing motivated by the new restaurant Next in Chicago and proprietors Grant Achatz and Nick Kokonas's new ticket policy. In the previous two installments I tried to use standard mechanism design theory to see what comes out when you feed in some non-standard pricing motives having to do with enhancing "consumer value." The two attempts that most naturally come to mind yielded insights but not a useful pricing system. Today the third time is the charm.
Things start to fall into place when we pay close attention to this part of Nick's comment to us:
we never want to invert the value proposition so that customers are paying a premium that is disproportionate to the amount of food / quality of service they receive.
I propose to formalize this as follows. From the restaurant’s point of view, consumer surplus is valuable but some consumers are prepared to bid even more than the true value of the service they will get. The restaurant doesn’t count these skyscraping bids as actually reflecting consumer surplus and they don’t want to tailor their mechanism to cater to them. In particular, the restaurant distinguishes willingness to pay from “value.”
I can think of a number of sensible reasons they would take this view. They might know that many patrons overestimate the value of a seating at Next. Indeed the restaurant might worry that high prices by themselves artificially inflate willingness to pay. They don’t want a bubble. And they worry about their reputation if someone pays $1700 for a ticket, gets only $1000 worth of value and publicly gripes. Finally they might just honestly believe that willingness to pay is a poor measure of welfare especially when comparing high versus low.
Whatever the reason, let's run with it. Let's define $v(b)$ to be the value, as the restaurant perceives it, that would be realized by service to a patron whose willingness to pay is $b$. One natural example would be
$$v(b) = \min\{b, \bar v\}$$
where $\bar v$ is some prespecified "cap." It would be like saying that nobody, no matter how much they say they are willing to pay, really gets a value larger than, say, $\bar v$ from eating at Next.
Now let’s consider the optimal pricing mechanism for a restaurant that maximizes a weighted sum of profit and consumer’s surplus, where now consumer’s surplus is measured as the difference between and whatever price is paid. The weight on profit is and the weight on consumer surplus is . After you integrate by parts you now get the following formula for virtual surplus.
And now we have something! Because if is between and then the first term is increasing in (up to the cap ) and the second term is decreasing. For close enough to , the overall virtual surplus is going to be first increasing and then decreasing. And that means that the optimal mechanism is something new. When bids are in the low to moderate range, you use an auction to decide who gets served. But above some level, high bidders don’t get any further advantage and they are all lumped together.
The optimal mechanism is a hybrid between an auction and a lottery. It has no reserve price (over and above the cost of service) so there are never empty seats. It earns profits but eschews exorbitant prices.
It has clear advantages over a fixed price. A fixed price is a blunt instrument that has to serve two conflicting purposes. It has to be high enough to earn sufficient revenue on dates when demand is high enough to support it, but it can’t be too high that it leads to empty seats on dates when demand is lower. An auction with rationing at the top is flexible enough to deal with both tasks independently. When demand is high the fixed price (and rationing) is in effect. When demand is low the auction takes care of adjusting the price downward to keep the restaurant full. The revenue-enhancing effects of low prices is an under-appreciated benefit of an auction. Finally, it’s an efficient allocation system for the middle range of prices so scalping motivations are reduced compared to a fixed price.
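The allocation rule is easy to sketch. In the stylized Python version below (my own, allocation only; the payment rule of the optimal mechanism is omitted), bidders at or above a cap are pooled and rationed by lottery, and the remaining seats go to the highest bidders below the cap:

```python
import random

def allocate_seats(bids, seats, cap, rng=random):
    """Auction with rationing at the top: returns winning bidder indices.

    Bidders at or above `cap` are treated identically (lottery); any
    remaining seats are auctioned off to the highest bids below the cap.
    """
    top = [i for i, b in enumerate(bids) if b >= cap]
    below = sorted((i for i, b in enumerate(bids) if b < cap),
                   key=lambda i: bids[i], reverse=True)
    if len(top) >= seats:
        return rng.sample(top, seats)      # high bidders get no extra advantage
    return top + below[:seats - len(top)]  # auction fills the rest
```

When demand is low the cap never binds and this is just an auction with no reserve; when demand is high the lottery at the top kicks in.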
Incentives for scalping are not eliminated altogether because of the rationing at the top. This can be dealt with by controlling the resale market. Indeed here is one clear message that comes out of all of this. Whatever motivation the restaurant has for rationing sales, it is never optimal to allow unfettered resale of tickets. That only undermines what you were trying to achieve. Now Grant Achatz and Nick Kokonas understand that but they are forced to condone the Craigslist market because by law non-refundable tickets must be freely transferrable.
But the cure is worse than the disease. In fact refundable tickets are your friend. The reason someone wants to return their ticket for a refund is that their willingness to pay has dropped below the price. But there is somebody else with a willingness to pay that is above the price. We know this for sure because tickets are being rationed at that price. Granting the refund allows the restaurant to immediately re-sell it to the next guy waiting in line. Indeed, a hosted resale market would enable the restaurant to ensure that such transactions take place instantaneously through an automated system according to the same terms under which tickets were originally sold.
Someone ought to try this.
Restaurants, touring musicians, and sports franchises are not out to gouge every last penny out of their patrons. They want patrons to enjoy their craft but also to come away feeling like they didn’t pay an arm and a leg. Yesterday I tried to formalize this motivation as maximizing consumer surplus but that didn’t give a useful answer. Maximizing consumer surplus means either complete rationing (and zero profit) or going all the way to an auction (a more general argument why appears below.) So today I will try something different.
Presumably the restaurant cares about profits too. So it makes sense to study the mechanism that maximizes a weighted sum of profits and consumer’s surplus. We can do that. Standard optimal mechanism design proceeds by a sequence of mathematical tricks to derive a measure of a consumer’s value called virtual surplus. Virtual surplus allows you to treat any selling mechanism you can imagine as if it worked like this
- Consumers submit “bids”
- Based on the bids received the seller computes the virtual surplus of each consumer.
- The consumer with the highest virtual surplus is served.
If you write down the optimal mechanism design problem where the seller puts weight on profits and weight on consumer surplus, and you do all the integration by parts, you get this formula for virtual surplus.
where v is the consumer’s willingness to pay, F(v) is the proportion of consumers with willingness to pay less than v, and f(v) is the corresponding probability density function. That last ratio, (1 − F(v))/f(v), is called the (inverse) hazard rate.
As usual, just staring down this formula tells you just about everything you want to know about how to design the pricing system. One very important thing to know is what to do when virtual surplus is a decreasing function of v. If we have a decreasing virtual surplus then we learn that it’s at least as important to serve the low valuation buyers as those with high valuations (see point 3 above.)
But here’s a key observation: it’s impossible to sell to low valuation buyers and not also to high valuation buyers because whatever price the former will agree to pay, the latter will pay too. So a decreasing virtual surplus means that you do the next best thing: you treat high and low value types the same. This is how rationing becomes part of an optimal mechanism.
For example, suppose the weight on profit is equal to 0. That brings us back to yesterday’s problem of just maximizing consumer surplus. And our formula now tells us why complete rationing is optimal: virtual surplus is just equal to the inverse hazard rate (1 − F(v))/f(v), which is typically monotonically decreasing. Intuitively, here’s what the virtual surplus is telling us when we are trying to maximize consumer surplus. If we are faced with two bidders and one has a higher valuation than the other, then to try to discriminate would require that we set a price in between the two. That’s too costly for us because it would cut into the consumer surplus of the eventual winner.
So that’s how we get the answer I discussed yesterday. Before going on I would like to elaborate on yesterday’s post based on correspondence I had with a few commenters, especially David Miller and Kane Sweeney. Their comments highlight two assumptions that are used to get the rationing conclusion: monotone hazard rate, and no payments to non-buyers. It gets a little more technical than usual so I am going to put it here in an addendum to yesterday (scroll down for the addendum.)
Now back to the general case we are looking at today, we can consider other values of α.
An important benchmark case is α = 1, when virtual surplus reduces to just v, now monotonically increasing. That says that a seller who puts equal weight on profits and consumer surplus will always allocate to the highest bidder because his virtual surplus is higher. An auction does the job, in fact a second price auction is optimal. The seller is implementing the efficient outcome.
More interesting is when α is between 0 and 1. In general then the shape of the virtual surplus will depend on the distribution F, but the general tendency will be toward either complete rationing or an efficient auction. To illustrate, suppose that willingness to pay is distributed uniformly from 0 to 1. Then virtual surplus reduces to

VS(v) = αv + (1 − α)(1 − v) = (2α − 1)v + (1 − α)
which is either decreasing over the whole range of v (when α < 1/2), implying complete rationing, or increasing over the whole range (when α > 1/2), prescribing an auction.
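As a sanity check on the uniform illustration, here is a small Python sketch (my own, not from the original post) confirming that the virtual surplus (2α − 1)v + (1 − α) flips from decreasing to increasing at α = 1/2:

```python
# Virtual surplus for v ~ Uniform[0,1]: F(v) = v and f(v) = 1, so
#   VS(v) = a*v + (1 - a)*(1 - v) = (2a - 1)*v + (1 - a),
# where a is the weight the seller puts on profit.

def virtual_surplus(v, a):
    return a * v + (1 - a) * (1 - v)

def is_increasing(a, grid=100):
    """Check monotonicity of VS on a grid over [0, 1]."""
    vs = [virtual_surplus(i / grid, a) for i in range(grid + 1)]
    return all(x <= y for x, y in zip(vs, vs[1:]))

assert not is_increasing(0.25)  # a < 1/2: decreasing, so complete rationing
assert is_increasing(0.75)      # a > 1/2: increasing, so an auction
assert is_increasing(1.0)       # equal-weight benchmark: VS(v) = v
```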
Finally, when α > 1, virtual surplus is the difference between an increasing function, αv, and a decreasing function, (α − 1)(1 − F(v))/f(v), and so it is increasing over the whole range and this means that an auction is optimal (now typically with a reserve price above cost so that in return for higher profits the restaurant lives with empty tables and inefficiency. This is not something any restaurant would choose if it can at all avoid it.)
What do we conclude from this? Maximizing a weighted sum of consumer surplus and profit again yields one of two possible mechanisms: complete rationing or an auction. Neither of these mechanisms seems to fit what Nick Kokonas was looking for in his comment to us and so we have to go back to the drawing board again.
Tomorrow I will take a closer look and extract a more refined version of Nick’s objective that will in fact produce a new kind of mechanism that may just fit the bill.
Last week, in response to our proposal for how to run a ticket market, Nick Kokonas of Next Restaurant wrote something interesting.
Simply, we never want to invert the value proposition so that customers are paying a premium that is disproportionate to the amount of food / quality of service they receive. Right now we have it as a great bargain for those who can buy tickets. Ideally, we keep it a great value and stay full.
Economists are not used to that kind of thinking and certainly not accustomed to putting such objectives into our models, but we should. Many sellers share Nick’s view and the economist’s job is to show the best way to achieve a principal’s objective, whatever it may be. We certainly have the tools to do it.
Here’s an interesting observation to start with. Suppose that we interpret Nick as wanting to maximize consumer surplus. What pricing mechanism does that? A fixed price has the advantage of giving high consumer surplus when willingness to pay is high. The key disadvantage is rationing: a fixed price has no way of ensuring that the guy with a high value and therefore high consumer surplus gets served ahead of a guy with a low value.
By contrast an auction always serves the guy with the highest value and that translates to higher consumer surplus at any given price. But the competition of an auction will lead to higher prices. So which effect dominates?
Here’s a little example. Suppose you have two bidders and each has a willingness to pay that is distributed according to the uniform distribution on the interval [0, 1]. Let’s net out the cost of service and hence take that to be zero.
If you use a rationing system, each bidder has a 50-50 chance of winning and paying nothing (i.e. paying the cost of service.) So a bidder whose value for service is v will have expected consumer surplus equal to v/2.
If instead you use an auction, what happens? First, the highest bidder will win so that a bidder with value v wins with probability v. (That’s just the probability that his opponent had a lower value.) For bidders with high values that is going to be higher than the 50-50 probability from the rationing system. That’s the benefit of an auction.
However he is going to have to pay for it and his expected payment, conditional on winning, is v/2. (The simplest way to see this is to consider a second-price auction where he pays his opponent’s bid. His opponent has a dominant strategy to bid his value, and with the uniform distribution that value will be on average v/2 conditional on being below v.) So his consumer surplus is only

v · (v − v/2) = v²/2
because when he wins his surplus is his value v minus his expected payment v/2, and he wins with probability v.
So in this example we see that, from the point of view of consumer surplus, the benefits of the efficiency of an auction are more than offset by the cost of higher prices. But this is just one example and an auction is just one of many ways we could think of improving upon rationing.
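A quick Monte Carlo sketch (my own illustration, with the cost of service normalized to zero as in the example) reproduces the comparison for two bidders with values drawn uniformly from [0, 1]:

```python
import random

random.seed(0)
N = 200_000
cs_ration = cs_auction = 0.0

for _ in range(N):
    v1, v2 = random.random(), random.random()
    # Rationing at price = cost = 0: a coin flip picks the winner,
    # whose surplus is simply his value.
    cs_ration += random.choice((v1, v2))
    # Second-price auction: the high-value bidder wins and pays the
    # loser's value, so total consumer surplus is max - min.
    cs_auction += max(v1, v2) - min(v1, v2)

print(round(cs_ration / N, 3))   # close to 1/2 = E[v]
print(round(cs_auction / N, 3))  # close to 1/3 = E[max] - E[min]
```

The auction allocates efficiently, but price competition transfers so much surplus to the seller that consumers end up worse off in total.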
However, it turns out that the best mechanism for maximizing consumer surplus is always complete rationing (I will prove this as a part of a more general demonstration tomorrow.) Set price equal to marginal cost and use a lottery (or a queue) to allocate among those willing to pay the price. (I assume that the restaurant is not going to just give away money.)
What this tells us is that maximizing consumer surplus can’t be what Nick Kokonas wants. Because with the consumer surplus maximizing mechanism, the restaurant just breaks even. And in this analysis we are leaving out all of the usual problems with rationing such as scalping, encouraging bidders with near-zero willingness to pay to submit bids, etc.
So tomorrow I will take a second stab at the question in search of a good theory of pricing that takes into account the “value proposition” motivation.
Addendum: I received comments from David Miller and Kane Sweeney that will allow me to elaborate on some details. It gets a little more technical than the rest of these posts so you might want to skip over this if you are not acquainted with the theory.
David Miller reminded me of a very interesting paper by Ran Shao and Lin Zhou. (See also this related paper by the same authors.) They demonstrate a mechanism that achieves a higher consumer surplus than the complete rationing mechanism and indeed that achieves the highest consumer surplus among all dominant-strategy, individually rational mechanisms.
Before going into the details of their mechanism let me point out the difference between the question I am posing and the one they answer. In formal terms I am imposing an additional constraint, namely that the restaurant will not give money to any consumer who does not obtain a ticket. The restaurant can give tickets away but it won’t write a check to those not lucky enough to get freebies. This is the right restriction for the restaurant application for two reasons. First, if the restaurant wants to maximize consumer surplus it’s because it wants to make people happy about the food they eat, not happy about walking away with no food but a payday. Second, as a practical matter a mechanism that gives away money is just going to attract non-serious bidders who are looking for a handout.
In fact Shao and Zhou are starting from a related but conceptually different motivation: the classical problem of bilateral trade between two agents. In the most natural interpretation of their model the two bidders are really two agents negotiating the sale of an object that one of them already owns. Then it makes sense for one of the agents to walk away with no “ticket” but a paycheck. It means that he sold the object to the other guy.
Ok with all that background here is their mechanism in its simplest form. Agent 1 is provisionally allocated the ticket (so he becomes the seller in the bilateral negotiation.) Agent 2 is given the option to buy from agent 1 at a fixed price. If his value is above that price he buys and pays the price to agent 1. Otherwise agent 1 keeps the ticket and no money changes hands. (David in his comment described a symmetric version of the mechanism which you can think of as representing a random choice of who will be provisionally allocated the ticket. In our correspondence we figured out that the payment scheme for the symmetric version should be a little different, it’s an exercise to figure out how. But I didn’t let him edit his comment. Ha Ha Ha!!!)
In the uniform case the price should be set at 50 cents and this gives a total surplus of 5/8, outperforming complete rationing. It’s instructive to understand how this is accomplished. As I pointed out, an auction takes away consumer surplus from high-valuation types. But in the Shao-Zhou framework there is an upside to this. Because the money extracted will be used to pay off the other agent, raising his consumer surplus. So you want to at least use some auction elements in the mechanism.
One common theme in my analysis and theirs is in fact a deep and under-appreciated result. You never want to “burn money.” Using an auction is worse than complete rationing because the screening benefits of pricing are outweighed by the surplus lost due to the payments to the seller. Using the Shao-Zhou mechanism is optimal precisely because it finds a clever way to redirect those payments so no money is burned. By the way this is also an important theme in David Miller’s work on dynamic mechanisms. See here and here.
Finally, we can verify that the Shao-Zhou mechanism would no longer be optimal if we adapted it to satisfy the constraint that the loser doesn’t receive any money. It’s easy to do this based on the revenue equivalence theorem. In the Shao-Zhou mechanism an agent with zero value gets expected utility equal to 1/8 due to the payments he receives. We can subtract utility of 1/8 from all types and obtain an incentive-compatible mechanism with the same allocation rule. This would be just enough to satisfy my constraint. And then the total surplus will be 5/8 − 2/8 = 3/8, which is less than the 1/2 of the complete rationing mechanism. That’s another expression of the losses associated with using even the very crude screening in the Shao-Zhou mechanism.
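Here is a little simulation (my own check, using the simplest form of the mechanism described above) confirming the 5/8 figure against the 1/2 from complete rationing:

```python
import random

random.seed(0)
N = 200_000
shao_zhou = ration = 0.0

for _ in range(N):
    v1, v2 = random.random(), random.random()
    # Shao-Zhou, simplest form: agent 1 provisionally holds the ticket;
    # agent 2 may buy it at price 1/2, paid directly to agent 1.
    if v2 > 0.5:
        shao_zhou += (v2 - 0.5) + 0.5  # buyer's surplus + seller's receipt = v2
    else:
        shao_zhou += v1                # agent 1 keeps the ticket
    # Complete rationing: a random agent gets the ticket for free.
    ration += random.choice((v1, v2))

print(round(shao_zhou / N, 3))  # close to 5/8 = 0.625
print(round(ration / N, 3))     # close to 1/2
```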
Next let me tell you about my correspondence with Kane Sweeney. He constructed a simple example where an auction outperforms rationing. It works like this. Suppose that each bidder has either a very low willingness to pay, say 50 cents, or a very high willingness to pay, say $1,000. If you ration then expected surplus is about $500. Instead you could do the following. Run a second-price auction with the following modification to the rules. If both bid $1,000 then toss a coin and give the ticket to the winner at a price of $1. This mechanism gives an expected surplus of about $750.
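Kane’s numbers are easy to verify with a short simulation (again my own sketch of the example, with each value equally likely):

```python
import random

random.seed(0)
N = 200_000
LO, HI = 0.5, 1000.0  # 50 cents or $1,000, equally likely
ration = modified = 0.0

for _ in range(N):
    v1 = random.choice((LO, HI))
    v2 = random.choice((LO, HI))
    # Rationing: a random bidder gets the ticket at a price of ~0.
    ration += random.choice((v1, v2))
    # Modified second-price auction: if both bid $1,000, toss a coin and
    # charge the winner $1; otherwise ordinary second-price rules apply.
    if v1 == HI and v2 == HI:
        modified += HI - 1.0
    else:
        modified += max(v1, v2) - min(v1, v2)

print(round(ration / N))    # about $500
print(round(modified / N))  # about $750
```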
Basically this type of example shows that the monotone hazard rate assumption is important for the superiority of rationing. To see this, suppose that we smooth out the distribution of values so that types between 50 cents and $1000 have very small positive probability. Then the hazard rate is first increasing around 50 cents and then decreasing from 50 cents all the way to $1000. So you want to pool all the types above 50 cents but you want to screen out the 50-cent types. That’s what Kane’s mechanism is doing.
I would interpret Kane’s mechanism as delivering a slightly nuanced version of the rationing message. You want to screen out the non-serious bidders but ration among all of the serious bidders.
A former academic economist and game theorist is now the Chief Economic Advisor in the Ministry of Finance in India. His name is Kaushik Basu. Via MR, here is a policy paper he has just written advising that the giving of bribes should be de-criminalized.
The paper puts forward a small but novel idea of how we can cut down the incidence of bribery. There are different kinds of bribes and what this paper is concerned with are bribes that people often have to give to get what they are legally entitled to. I shall call these “harassment bribes.” Suppose an income tax refund is held back from a taxpayer till he pays some cash to the officer. Suppose government allots subsidized land to a person but when the person goes to get her paperwork done and receive documents for this land, she is asked to pay a hefty bribe. These are all illustrations of harassment bribes. Harassment bribery is widespread in India and it plays a large role in breeding inefficiency and has a corrosive effect on civil society. The central message of this paper is that we should declare the act of giving a bribe in all such cases as legitimate activity. In other words the giver of a harassment bribe should have full immunity from any punitive action by the state.
This is not just crazy talk, there is some logic behind it fleshed out in the paper. If giving a bribe is forgiven but demanding a bribe remains a crime, then citizens forced to pay bribes for routine government services will have an incentive to report the bribe to the authorities. This will discourage harassment bribery.
The obvious question is whether the bribe-enforcement authority will itself demand bribes. To whom does a citizen report having given a bribe to the bribe authority? At some point there is a highest bribe authority and it can demand bribes with impunity. With that power it can extract all of the reporter’s gains by demanding them as a bribe.
Worse still they can demand an additional bribe from the original harasser in return for exonerating her. The effect is that the harasser sees only a fraction of the return on her bribe demands. This induces her to ask for even higher bribes. Higher bribes means fewer citizens are able to pay them and fewer citizens receive their due government services.
The bottom line is that in an economy run on bribes you want to make the bribes as efficient as possible. That may mean encouraging them rather than discouraging them.
There are a few basic features that Grant Achatz and Nick Kokonas should build into their online ticket sales. First, you want a good system to generate the initial allocation of tickets for a given date; second, you want an efficient system for re-allocating tickets as the date approaches. Finally, you want to balance revenue maximization against the good vibe that comes from getting a ticket at a non-exorbitant price.
- Just like with the usual reservation system, you would open up ticket sales for, say August 1, 3 months in advance on May 1. It is important that the mechanism be transparent, but at the same time understated so that the business of selling tickets doesn’t draw attention away from the main attractions: the restaurant and the bar. The simple solution is to use a sealed bid N+1st price auction. Anyone wishing to buy a ticket for August 1 submits a bid. Only the restaurant sees the bid. The top 100 bidders get tickets and they pay a price equal to the 101st highest bid. Each bidder is informed whether he won or not and the final price. With this mechanism it is a dominant strategy to bid your true maximal willingness to pay so the auction is transparent, and all of the action takes place behind the scenes so the auction won’t be a spectacle distracting from the overall reputation of the restaurant.
- Next probably wants to allow patrons to buy at lower prices than what an auction would yield. That makes people feel better about the restaurant than if it was always trying to extract every last drop of consumer’s surplus. It’s easy to work that into the mechanism. Decide that 50 out of 100 seats will be sold to people at a fixed price and the remainder will be sold by auction. The 50 lucky people will be chosen randomly from all of those whose bid was at least the fixed price. The division between fixed-price and auction quantities could easily be adjusted over time, for different days of the week, etc.
- The most interesting design issue is to manage re-allocation of tickets. This is potentially a big deal for a restaurant like Next because many people will be coming from out of town to eat there. Last-minute changes of plans could mean that rapid re-allocation of tickets will have a big impact on efficiency. More generally, a resale market raises the value of a ticket because it turns the ticket into an option. This increases the amount people are willing to bid for it. So Next should design an online resale market that maximizes the efficiency of the allocation mechanism because those efficiency gains not only benefit the patrons but they also pay off in terms of initial ticket sales.
- But again you want to minimize the spectacle. You don’t want Craigslist. Here is a simple transparent system that is again discreet. After the original allocation of tickets by auction, anyone who wishes to purchase a ticket for August 1 submits their bid to the system. In addition, anyone currently holding a ticket for August 1 has the option of submitting a resale price to the system. These bids are all kept secret internally in the system. At any moment in which the second highest bid exceeds the second lowest resale price offered, a transaction occurs. In that transaction the highest bidder buys the ticket and pays the second-highest bid. The seller who offered the lowest price sells his ticket and receives the second lowest price.
- That pricing rule has two effects. First, it makes it a dominant strategy for buyers to submit bids equal to their true willingness to pay and for sellers to set their true reserve prices. Second, it ensures that Next earns a positive profit from every sale equal to the difference between the second-highest bid and the second-lowest resale price. In fact it can be shown that this is the system that maximizes the efficiency of the market subject to the constraints that the market is transparent (i.e. dominant strategies) and that Next does not lose money from the resale market.
- The system can easily be fine-tuned to give Next an even larger cut of the transactions gains, but a basic lesson of this kind of market design is that Next should avoid any intervention of that sort. Any profits earned through brokering resale only reduce the efficiency of the resale market. If Next is taking a cut then a trade will only occur if the gains outweigh Next’s cut. Fewer trades means a less efficient resale market and that means that a ticket is a less flexible asset. The final result is that whatever profits are being squeezed out of the resale market are offset by reduced revenues from the original ticket auction.
- The one exception to the latter point is the people who managed to buy at the fixed price. If the goal was to give those people the gift of being able to eat at Next for an affordable price and not to give them the gift of being able to resell to high rollers, then you would offer them only the option to sell back their ticket at the original price (with Next either selling it again at the fixed price or at the auction price, pocketing the spread.) This removes the incentive for “scalpers” to flood the ticket queue, something that is likely to be a big problem for the system currently being used.
- A huge benefit of a system like this is that it makes maximal use of information about patrons’ willingness to pay and with minimal effort. Compare this to a system where Next tries to gauge buyer demand over time and set the market clearing price. First of all, setting prices is guesswork. An auction figures out the price for you. Second, when you set prices you learn very little about demand. You learn only that so many people were willing to pay more than the price. You never find out how much more than that price people would have been willing to pay. A sealed bid auction immediately gives you data on everybody’s willingness to pay. And at every moment in time. That’s very valuable information.
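To make the first bullet concrete, here is a minimal sketch of the sealed-bid N+1st price allocation (hypothetical code of my own, not anything Next actually runs; ties and refunds are ignored):

```python
def allocate_tickets(bids, n_seats):
    """Sealed-bid N+1st price auction. bids maps bidder -> sealed bid.
    The top n_seats bidders win and each pays the highest losing bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [bidder for bidder, _ in ranked[:n_seats]]
    # Uniform price: the (N+1)st highest bid, or 0 if the room doesn't fill.
    price = ranked[n_seats][1] if len(ranked) > n_seats else 0.0
    return winners, price

winners, price = allocate_tickets(
    {"ann": 120.0, "bob": 95.0, "cy": 150.0, "dee": 80.0}, n_seats=2)
print(winners, price)  # ['cy', 'ann'] 95.0
```

Because a winner’s payment never depends on his own bid, bidding true willingness to pay is a dominant strategy, which is the transparency property described above.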
Suppose I want you to believe something and after hearing what I say you can, at some cost, check whether I am telling the truth. When will you take my word for it and when will you investigate?
If you believe that I am someone who always tells the truth you will never spend the cost to verify. But then I will always lie (whenever necessary.) So you must assign some minimal probability to the event that I am a liar in order to have an incentive to investigate and keep me in check.
Now suppose I have different ways to frame my arguments. I can use plain language or I can cloak them in the appearance of credibility by using sophisticated jargon. If you lend credibility to jargon that sounds smart, then other things equal you have less incentive to spend the effort to verify what I say. That means that jargon-laden statements must be even more likely to be lies in order to restore the balance.
(Hence, statistics come after “damned lies” in the hierarchy.)
Finally, suppose that I am talking to the masses. Any one of you can privately verify my arguments. But now you have a second, less perfect way of checking. If you look around and see that a lot of other people believe me, then my statements are more credible. That’s because if other people are checking me and many of them demonstrate with their allegiance that they believe me, it reveals that my statements checked out with those that investigated.
Other things equal, this makes my statements more credible to you ex ante and lowers your incentives to do the investigating. But that’s true of everyone so there will be a lot of free-riding and too little investigating. Statements made to the masses must be even more likely to be lies to overcome that effect.
How do you get deadbeat dads to pay child support? You threaten them with incarceration if they don’t pay. But if the punishment has its intended effect you will find that the only deadbeats who actually receive the punishment are those for whom the punishment is pointless because they don’t have the money to pay. They are the turnips.
“Deadbeats,” according to Sorensen, are parents who could pay but choose not to. “Turnips” — invoking the phrase, “You can’t get blood out of a turnip” — are parents who don’t have the money to pay. So what percentage of nonpaying parents are deadbeats and what percentage are turnips? Sorensen says most of those who end up in jail are low-income, and thus, “more likely to be a turnip than a deadbeat.”
Is that a bug or a feature? That’s part of what the Supreme Court will decide in a case that was argued last week.
This is an easy one: North Korea thinks (1) the US is out to exploit and steal resources from other countries and hence (2) Libya was foolish to give away its main weapon, its nascent nuclear arsenal, which acted as a deterrent to American ambition. Accordingly,
“The truth that one should have power to defend peace has been confirmed once again,” the [North Korean] spokesperson was quoted as saying, as he accused the U.S. of having removed nuclear arms capabilities from Libya through negotiations as a precursor to invasion.
“The Libyan crisis is teaching the international community a grave lesson,” the spokesperson was quoted as saying, heaping praise on North Korea’s songun, or military-first, policy.
In a perceptive analysis, Professor Ruediger Frank adds two more examples that inform North Korean doctrine. Gorbachev’s attempts to modernize the Soviet Union led to its collapse and the emancipation of its satellite states. Saddam’s agreement to allow a no-fly zone after Gulf War I led inexorably to Gulf War II and his demise. The lesson: Get more nuclear arms and do not accede to any US demands.
Is there a solution that eliminates nuclear proliferation? Such a solution would have to convince North Korea that their real and perceived enemies are no more likely to attack even if they know North Korea does not have a nuclear deterrent. Most importantly, the US would have to eliminate North Korean fear of American aggression. In a hypothetical future where the North Korean regime has given up its nuclear arsenal, suppose the poor, half-starved citizens of North Korea stage a strike and mini-revolt for food and shelter and the regime strikes back with violence. Can it be guaranteed that South Korea does not get involved? Can it be guaranteed that Samantha Power does not urge intervention to President Obama in his second term or Bill Kristol to President Romney in his first? No. So, we are stuck with nuclear proliferation by North Korea. The only question is whether North Korea can feel secure with a small arsenal.
Tomas Sjostrom and I offer one option for reducing proliferation in our JPE paper Strategic Ambiguity and Arms Proliferation. If North Korea can keep the size and maturity of its nuclear arsenal hidden, we can but guess at its size and power. It might be large or quite small – who knows. This means even if the arsenal is actually small, North Korea can still pretend it is big and get some of the deterrent power of a large arsenal without actually having it. The potential to bluff afforded by ambiguity of the size of weapons stockpiles affords strategic power to North Korea. It reduces North Korea’s incentive to proliferate. And this in turn can help the U.S. particularly if they do not really want to attack North Korea but fear nuclear proliferation. Unlike poker and workplace posturing à la Dilbert, nuclear proliferation is not a zero-sum game. Giving an opponent the room to bluff can actually create a feedback loop that helps other players.
Grading still hangs over me but teaching is done. So, I finally had time to read Kiyotaki-Moore. It’s been on my pile of papers to read for many, many years. But it rose to the top because, first, my PhD teaching allowed me to finally get to Myerson’s bargaining chapter in his textbook and Abreu-Gul’s bargaining with commitment model and, second, because Eric Maskin recommends it as one of his key papers for understanding the financial crisis. So, some papers in my queue were cleared out and Kiyotaki-Moore leaped over several others.
I see why the paper has over 2000 cites on Google Scholar.
The main propagation mechanism in the model relies on the idea that credit-constrained borrowers borrow against collateral. The greater the value of the collateral (“land”), the greater the amount they can borrow. So, the higher next period’s price of land, the more the borrower can borrow against his land this period. Suppose there is an unexpected positive shock to the productivity of land. This increases the value of land and hence its price. This capital gain increases borrowing. An increase in the value of land increases economic activity. It also increases demand for land and hence the price of land. This can choke off some demand for land. The more elastic the supply of land, the smaller is the latter dampening effect. So there can be a significant multiplier to a positive shock to technology.
(Why are borrowers constrained in their borrowing by the value of their land rather than the NPV of their projects? Kiyotaki-Moore rely on a model of debt of Hart and Moore to justify this constraint. While Hart-Moore is also in my pile, I have not yet had time to read it. I did note they have an extremely long Appendix to justify the connection between collateral and borrowing! The main idea in Hart-Moore is that an entrepreneur can always walk away from a project and hold it up. As his human capital is vital for the project’s success, he will be wooed back in renegotiation. The Appendix must argue that he captures all the surplus above the liquidation value of the land. Hence, the lender will only be willing to lend up to the value of the collateral to avoid hold up.)
But how do we get credit cycles? As the price of land rises, the entrepreneurs acquire more land. This increases the price of land. They also accumulate debt. The debt constrains their ability to borrow and eventually demand for land declines and its price falls. A cycle. Notice that this cycle is not generated by shocks to technology or preferences but arises endogenously as land and debt holdings vary over time! I gotta think about this part more….
My daughter’s 4th grade class read The Emperor’s New Clothes (a two minute read) and today I led a discussion of the story. Here are my notes.
The Emperor, who was always to be found in his dressing room, commissioned some new clothes from weavers who claimed to have a magical cloth whose fine colors and patterns would be “invisible to anyone who was unfit for his office, or who was unusually stupid.”
Fast forward to the end of the story. Many of the Emperor’s most trusted advisors have, one by one, inspected the clothes and faced the same dilemma. Each of them could see nothing and yet for fear of being branded stupid or unfit for office each bestowed upon the weavers the most elaborate compliments they could muster. Finally the Emperor himself is presented with his new clothes and he is shocked to discover that they are invisible only to him.
“Am I a fool? Am I unfit to be the Emperor? What a thing to happen to me of all people! – Oh! It’s very pretty,” he said. “It has my highest approval.” And he nodded approbation at the empty loom. Nothing could make him say that he couldn’t see anything.
The weavers have successfully engineered a herd. For any inspector who doubts the clothes’ authenticity, to be honest and dispel the myth requires him to convince the Emperor that the clothes are invisible to everybody. That is risky because if the Emperor believes the clothes are authentic (either because he sees them or he thinks he is the only one who does not) then the inspector would be judged unfit for office. With each successive inspector who declares the clothes to be authentic the evidence mounts, making the risk to the next inspector even greater. After a long enough sequence no inspector will dare to deviate from the herd, including the Emperor himself.
The clothes and the herd are a metaphor for authority itself. Respect for authority is sustained only because others’ respect for authority is thought to be sufficiently strong to support the ouster of any who would question it.
But whose authority? The deeper lesson of the story is a theory of the firm based on the separation of ownership and management. Notice that it is the weavers who capture the rents from the environment of mutual fear that they have created. They show that the optimal use of their asset is to clothe a figurehead in artificial authority and hold him in check by keeping even him in doubt of his own legitimacy. The herd bestows management authority on the figurehead but ensures that rents flow to the owners who are surreptitiously the true authorities.
The swindlers at once asked for more money, more silk and gold thread, to get on with the weaving. But it all went into their pockets. Not a thread went into the looms, though they worked at their weaving as hard as ever.
The story concludes with a cautionary note. The herd holds together only because of calculated, self-interested subjects. The organizational structure is vulnerable if new members are not trained to see the wisdom of following along.
“But he hasn’t got anything on,” a little child said.
“Did you ever hear such innocent prattle?” said its father. And one person whispered to another what the child had said, “He hasn’t anything on. A child says he hasn’t anything on.”
“But he hasn’t got anything on!” the whole town cried out at last.
Herds are fragile because knowledge is contagious. As the organization matures, everyone secretly comes to know that the authority is fabricated. And later everyone comes to know that everyone else has secretly come to know it. This latent higher-order knowledge requires only a seed of public knowledge before it crystallizes into common knowledge that the organization is just a mirage.
And after that, who is the last member to maintain faith in the organization?
The Emperor shivered, for he suspected they were right. But he thought, “This procession has got to go on.” So he walked more proudly than ever, as his noblemen held high the train that wasn’t there at all.
I am always surprised in Spring how suddenly there are cars parked on the residential streets in my town where just a month ago the streets were empty. These are narrow streets, so a row of parked cars turns them into one-lane streets that are supposed to handle two-way traffic. And that is when we have to solve the problem of who enters the narrowed section first when two cars are coming in opposite directions on the street.
On my street cars are only allowed to park on the North side. So if I am headed West I have to move to the oncoming traffic side to pass the row of parked cars. If I do that and the car coming in the opposite direction has to stop for just a second or two, the driver will be understanding (a quick royal wave on the way by helps!) But if she has to wait much longer than that she is not going to be happy. And indeed the convention on my street would have me stop and wait even if I arrive at the bottleneck first.
But of course, from an efficiency point of view it shouldn’t matter which side the cars are parked on. Total waiting time is minimized by a first-come first-served convention. And note that there aren’t even distributional consequences because what goes West must go East eventually.
Still the payoff-irrelevant asymmetry seems to matter. For example, a driver headed West would never complain if he arrives second and is made to wait. And because of the strict efficiency gains this is not the same as New York on the right, London on the left. The perceived property right makes all the difference. And even I, who understand the efficiency argument, adhere to the convention.
Of course there is the matter of the gap. If the Westbound driver arrives just moments before the Eastbound driver then in fact he is forced to stop because at the other end he will be bottled in. There won’t be enough room for the Westbound driver to get through if the Eastbound driver has not stopped with enough of a gap.
And once you notice this you see that in fact the efficient convention is very difficult to maintain, especially when it’s a long row of cars. The efficient convention requires the Westbound driver to be able to judge the speed of the oncoming car as well as the current gap. And the reaction time of the Eastbound driver is an unobservable variable that will have to be factored in.
That ambiguity means that there is no scope for agreement on just how much of a headstart the Eastbound driver should be afforded. Especially because if he is forced to back up, he will be annoyed with good reason. So for sure the second best will give some baseline headstart to the Eastbound driver.
Then there’s the moral hazard problem. You can close the gap faster by speeding up a bit on the approach. And even if you don’t speed up, any misjudgement of the gap raises the suspicion that you did speed up, bolstering the Eastbound driver’s gripe. Note that the moral hazard problem is not mitigated by a convention which gives a longer headstart to the Eastbound driver. No matter what the headstart is, in those cases where the headstart is binding the incentive to speed up is there.
All things considered, the property rights convention, while inefficient from a first-best point of view, may in fact be the efficient one when the informational asymmetry and moral hazard problems are taken into account.
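For what it’s worth, the waiting-time comparison can be checked with a toy Monte Carlo simulation. Everything below is invented for illustration: a single bottleneck, uniform arrival times, and a fixed traversal time T.

```python
import random

# Toy check of the efficiency claim: total waiting time under a
# first-come-first-served rule versus the "Eastbound has priority"
# convention. All numbers are made up for illustration.

T = 10.0          # seconds to traverse the one-lane bottleneck
TRIALS = 100_000

random.seed(0)
fcfs_wait = priority_wait = 0.0

for _ in range(TRIALS):
    tW = random.uniform(0, 60)   # Westbound arrival time at the bottleneck
    tE = random.uniform(0, 60)   # Eastbound arrival time

    # First-come-first-served: the later arrival waits until the
    # earlier car has cleared the bottleneck.
    fcfs_wait += max(0.0, T - abs(tW - tE))

    # Priority convention: the Westbound car enters only if it can
    # clear before the Eastbound car arrives; otherwise it waits
    # until the Eastbound car has passed.
    if tW > tE - T:
        priority_wait += max(0.0, tE + T - tW)

print(f"avg wait, first-come-first-served: {fcfs_wait / TRIALS:.2f}s")
print(f"avg wait, Eastbound priority:      {priority_wait / TRIALS:.2f}s")
```

In this toy model the priority convention’s wait is pointwise at least the first-come-first-served wait, which is the sense in which the convention is first-best inefficient.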
An insightful analysis from John Quiggin at Crooked Timber of the organizational economics of Arab dictatorships.
The element of truth is that the Arab monarchies have good prospects of survival if they can manage the transition to constitutional monarchy. And it makes sense for them to do so. After all, a constitutional monarch gets to live, literally, like a king, without having to worry about boring stuff like budgets and foreign affairs. And, in the modern context, the risk that such a setup will be overthrown by a military coup, as happened to quite a few of the postcolonial constitutional monarchs, is much diminished. By contrast, there’s no such thing as a constitutional dictatorship or tyranny and no way to make the transition from President-for-Life to constitutional monarch. That’s not to say all the monarchs in the region will survive, or for that matter, that all the remaining dictatorships will fall. But the general point is valid enough.
With this corollary for Saudi Arabia
The other big problem is that this can’t easily be done in Saudi Arabia. There are not even the forms of a constitutional government to begin with. Worse, the state is not so much a monarchy as an aristocracy/oligarchy saddled with 7000 members of the House of Saud, and many more of the hangers-on that typify such states. These people have a lot to lose, and nothing to gain, from any move in the direction of democracy.
An important role of government is to provide public goods that cannot be provided via private markets. There are many ways to express this view theoretically, a famous one using modern theory is Mailath-Postlewaite. (Here is a simple exposition.) They consider a public good that potentially benefits many individuals and can be provided at a fixed per-capita cost C. (So this is a public good whose cost scales proportionally with the size of the population.)
Whatever institution is supposed to supply this public good faces the problem of determining whether the sum of all individuals’ values exceeds the cost. But how do you find out individuals’ values? Without government intervention the best you can do is ask them to put their money where their mouths are. But this turns out to be hopelessly inefficient. For example if everybody is expected to pay (at least) an equal share of the cost, then the good will be produced only if every single individual has a willingness to pay of at least C. The probability that happens shrinks to zero exponentially fast as the population grows. And in fact you can’t do much better than have everyone pay an equal share.
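A back-of-the-envelope calculation makes the exponential decay concrete. Suppose, purely for illustration, that each individual’s value is drawn uniformly from [0, 2C], so each is willing to pay the equal share C with probability one half:

```python
# Probability that every one of n individuals independently values the
# public good at or above the equal cost share C. With values uniform
# on [0, 2C], each individual clears the bar with probability 1/2, so
# the chance of unanimity is (1/2)**n -- exponential decay in n.

for n in [2, 10, 50, 100]:
    print(f"n = {n:3d}: P(good is produced) = {0.5 ** n:.3g}")
```

Already at n = 100 the good is essentially never produced, no matter how valuable it is on average.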
Government can help because it has the power to tax. We don’t have to rely on voluntary contributions to raise enough to cover the costs of the good. (In the language of mechanism design, the government can violate individual rationality.) But compulsory contributions don’t amount to a free lunch: if you are forced to pay you have no incentive to truthfully express your true value for the public good. So government provision of public goods helps with one problem but exacerbates another. For example if the policy is to tax everyone then nobody gives reliable information about their value and the best government can do is to compare the cost with the expected total value. This policy is better than nothing but it will often be inefficient since the actual values may be very different.
But government can use hybrid schemes too. For example, we could pick a representative group in the population and have them make voluntary contributions to the public good, signaling their value. Then, if enough of them have signaled a high willingness to pay, we produce the good and tax everyone else an equal share of the residual cost. This way we get some information revelation but not so much that the Mailath Postlewaite conclusion kicks in.
Indeed it is possible to get very close to the ideal mechanism with an extreme version of this. You set aside a single individual and then ask everyone else to announce their value for the public good. If the total of these values exceeds the cost you produce the public good and then charge them their Vickrey-Clarke-Groves (VCG) tax. It is well known that these taxes provide incentives for truthful revelation but that the sum of these taxes will fall short of the cost of providing the public good. Here’s where government steps in. The singled-out agent will be forced to cover the budget shortfall.
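Here is a sketch of that scheme with made-up numbers, using the standard pivot version of VCG for a binary public good: each reporter pays a tax only when he is pivotal, and the taxes sum to less than the cost, leaving a shortfall for the singled-out agent.

```python
# Pivot (VCG) mechanism for a binary public good, sketched with
# invented numbers. Everyone except one set-aside agent reports a
# value; the good is produced when reported values cover the cost;
# each reporter pays a tax only when he is pivotal. The taxes give
# truthful incentives but fall short of the cost -- the set-aside
# agent covers the shortfall.

values = [30.0, 25.0, 10.0, 45.0]   # truthful reports (illustrative)
cost = 90.0

total = sum(values)
produce = total >= cost

taxes = []
for v in values:
    others = total - v
    # An agent is pivotal if the others alone would not cover the
    # cost but the reports including him do.
    tax = max(0.0, cost - others) if produce else 0.0
    taxes.append(tax)

shortfall = cost - sum(taxes) if produce else 0.0
print("produce good:", produce)
print("pivot taxes: ", taxes)
print("shortfall charged to the set-aside agent:", shortfall)
```

With these numbers the good is produced, the pivot taxes raise only part of the cost, and the remainder lands on the set-aside agent.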
Now obviously this is bad policy and is probably infeasible anyway since the poor guy may not be able to pay that much. But the basic idea can be used in a perfectly acceptable way. The idea was that by taxing an agent we lose the ability to make use of information about his value so we want to minimize the efficiency loss associated with that. Ideally we would like to find an individual or group of individuals who are completely indifferent about the public good and tax them. Since they are indifferent we don’t need their information so we lose nothing by loading all of the tax burden on them.
In fact there is always such a group and it is a very large group: everybody who is not yet born. Since they have no information about the value of a public good provided today they are the ideal budget balancers. Today’s generation uses the efficient VCG mechanism to decide whether to produce the good and future generations are taxed to make up any budget imbalance.
There are obviously other considerations that come into play here and this is an extreme example contrived to make a point. But let me be explicit about the point. Balanced budget requirements force today’s generation to internalize all of the costs of their decisions. It is ingrained in our senses that this is the efficient way to structure incentives. For if we don’t internalize the externalities imposed on subsequent generations we will make inefficient decisions. While that is certainly true on many dimensions, it is not a universal truth. In particular public goods cannot be provided efficiently unless we offload some of the costs to the next generation.
I was walking along, and I saw just this hell of a big moose turd, I mean it was a real steamer! So I said to myself, “self, we’re going to make us some moose turd pie.” So I tipped that prairie pastry on its side, got my sh*t together, so to speak, and started rolling it down towards the cook car: flolump, flolump, flolump. I went in and made a big pie shell, and then I tipped that meadow muffin into it, laid strips of dough across it, and put a sprig of parsley on top. It was beautiful, poetry on a plate, and I served it up for dessert.
Here’s one of the thorniest incentive problems known to man. In an organization there is a job that has to be done. And not just anybody can do it well, you really need to find the guy who is best at it. The livelihood of the organization depends on it. But the job is no fun and everyone would like to get out of doing it. To make matters worse, performance is so subjective that no contract can be written to compensate the designee for a job well done.
The core conflict is exemplified in a story by Utah Phillips about railroad workers living out in the field as they work to level the track. Someone has to do the cooking for the team and nobody wants to do it. Lacking any better incentive scheme they went by the rule that if you complained about the food then from now on you were going to have to do the cooking.
You can see the problem with this arrangement. But is there any better system? You want to find the best cook but the only way to reward him is to relieve him of the job. That would be self defeating even if you could get it to work. You probably couldn’t because who would be willing to say the food was good if it meant depriving themselves of it the next time?
A simple rotation scheme at least has the benefit of removing the perverse incentive. Then on those days when the best cook has the job we can trust that he will make a good meal out of his own self interest. He might even volunteer to be the cook.
But it might be optimal to rule out volunteering too. Because that could just bring back the original incentive problem in a new form. Since ex ante nobody knows who the best cook is, everyone will set out to prove that they are incapable of making a palatable meal so that the one guy who actually can cook, whoever he is, will volunteer.
It may help to keep the identity of the cook secret. Then when a capable cook actually has the job he can feel free to make a good meal without worrying that he will be recruited permanently. It will also lower the incentive for the others to make a bad meal because nobody will know who to exclude in the future.
Even if there is no scheme that really solves the incentive problem, the freedom to complain is essential for organizational morale.
Well, this big guy come into the mess car, I mean, he’s about 5 foot forty, and he sets himself down like a fool on a stool, picked up a fork and took a big bite of that moose turd pie. Well he threw down his fork and he let out a bellow, “My God, that’s moose turd pie!”
“It’s good though.”
Believe it or not that line of thinking does lie just below the surface in many recruiting discussions. The recruiting committee wants to hire good people but because the market moves quickly it has to make many simultaneous offers and runs the risk of having too many acceptances. There is very often a real feeling that it is safe to make offers to the top people who will come with low probability but that it’s a real risk to make an offer to someone for whom the competition is not as strong and who is therefore likely to accept.
This is not about adverse selection or the winner’s curse. Slot-constraint considerations appear at the stage where it has already been decided which candidates we like and all that is left is to decide which ones we should offer. Anybody who has been involved in recruiting decisions has had to grapple with this conundrum.
But it really is a phantom issue. It’s just not possible to construct a plausible model under which your willingness to make an offer to a candidate is decreasing in the probability she will come. Take any model in which there is a (possibly increasing) marginal cost of filling a slot and candidates are identified by their marginal value and the probability they would accept an offer.
Consider any portfolio of offers which involves making an offer to candidate F. The value of that portfolio is a linear function of the probability that F accepts the offer. For example, consider making offers to two candidates F and G. The value of this portfolio is

p_F p_G (v_F + v_G − c_2) + p_F (1 − p_G)(v_F − c_1) + (1 − p_F) p_G (v_G − c_1)

where p_F and p_G are the acceptance probabilities, v_F and v_G are the values, and c_1 and c_2 are the costs of hiring one or two candidates in total. This can be re-arranged to

p_G (v_G − c_1) + p_F [v_F − c_1 − p_G (m − c_1)]

where m = c_2 − c_1 is the marginal cost of a second hire. If the bracketed expression is positive then you want to include F in the portfolio and the value of doing so only gets larger as p_F increases. (note to self: wordpress latex is whitespace-hating voodoo)

In particular, if F is in the optimal portfolio, then that remains true when you raise p_F.
It’s not to say that there aren’t interesting portfolio issues involved in this problem. One issue is that worse candidates can crowd out better ones. In the example, as the probability that G accepts an offer, p_G, increases you begin to drop others from the portfolio. Possibly even others who are better than G.
For example, suppose that the department is slot-constrained and would incur the Dean’s wrath if it hired two people this year. If v_F > v_G so that you prefer candidate F, you will nevertheless make an offer only to G if p_G is very high.
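The slot-constrained example can be checked by enumerating the four possible offer portfolios. All names and numbers below are invented: F is the better candidate, the second hire carries a prohibitive cost (the Dean’s wrath), and G’s rising acceptance probability crowds F out.

```python
from itertools import combinations

# Enumerate offer portfolios over two candidates F and G and compute
# expected values. Illustrative numbers: F is the better candidate,
# but a second hire triggers a large cost (the Dean's wrath).

v = {"F": 100.0, "G": 80.0}          # candidate values (F preferred)
cost = {0: 0.0, 1: 50.0, 2: 500.0}   # cost of hiring 0, 1 or 2 in total

def portfolio_value(offers, p):
    """Expected value of making offers to the set `offers`, given
    acceptance probabilities p."""
    names = list(offers)
    total = 0.0
    # Sum over every possible subset of offerees who accept.
    for k in range(len(names) + 1):
        for accepted in combinations(names, k):
            prob = 1.0
            for c in names:
                prob *= p[c] if c in accepted else 1 - p[c]
            total += prob * (sum(v[c] for c in accepted) - cost[k])
    return total

def best_portfolio(p):
    options = [(), ("F",), ("G",), ("F", "G")]
    return max(options, key=lambda s: portfolio_value(s, p))

# As G's acceptance probability rises, G crowds out the better F.
for pG in [0.2, 0.6, 0.9]:
    p = {"F": 0.3, "G": pG}
    print(f"p_G = {pG}: best portfolio = {best_portfolio(p)}")
```

Note that raising a candidate’s own acceptance probability never gets him dropped; it is the rival’s acceptance probability that does the crowding out.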
In general, I guess that the optimal portfolio is a hard problem to solve. It reminds me of this paper by Hector Chade and Lones Smith. They study the problem of how many schools to apply to, but the analysis is related.
What is probably really going on when the titular quotation arises is that factions within the department disagree about the relative values v_F and v_G. If F is a theorist and G a macro-economist, the macro-economists will foresee that a high p_F means no offer for G.
Another observation is that Deans should not use hard offer constraints but instead expose the department to the true marginal cost curve, understanding that the department will make these calculations and voluntarily ration offers on its own. (When p_G is not too high, it is optimal to make offers to both F and G, and a hard offer constraint prevents that.)
The Texas legislature is on the verge of passing a law permitting concealed weapons on University campuses, including the University of Texas where just this Fall my co-author Marcin Peski was holed up in his office waiting out a student who was roaming campus with an assault rifle.
This post won’t come to any conclusions, but I will try to lay out the arguments as I see them. More guns, less crime requires two assumptions. First, people will carry guns to protect themselves and second, gun-related crime will be reduced as a result.
There are two reasons that crime will be reduced: crime pays off less often, and sometimes it leads to shooting. In a perfect world, a gun-toting victim of a crime simply brandishes his gun and the criminal walks away or is apprehended and nobody gets hurt. In that perfect world the decision to carry a gun is simple. If there is any crime at all you should carry a gun because there are no costs and only benefits. And then the decision of criminals is simple too: crime doesn’t pay because everyone is carrying a gun.
(In equilibrium we will have a tiny bit of crime, just enough to make sure everyone still has an incentive to carry their guns.)
But the world is not perfect like that and when a gun-carrying criminal picks on a gun-carrying victim, there is a chance that either of them will be shot. This changes the incentives. Now your decision to carry a gun is a trade-off between the chance of being shot versus the cost of being the victim of a crime. The people who will now choose to carry guns are those for whom the cost of being the victim of a crime outweighs the cost of an increased chance of getting shot.
If there are such people then there will be more guns. These additional guns will reduce crime because criminals don’t want to be shot either. In equilibrium there will be a marginal concealed-weapon carrier. He’s the guy who, given the level of crime, is just indifferent between being a victim of crime and having a chance of being shot. Everyone who is more willing to escape crime and/or more willing to face the risk of being shot will carry a gun. Everyone else will not.
In this equilibrium there are more guns and less crime. On the other hand there is no theoretical reason that this is a better outcome than no guns, more crime. Because this market has externalities: there will be more gun violence. Indeed the key endogenous variable is the probability of a shootout if you carry a gun and/or commit a crime. It must be high enough to deter crime.
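The equilibrium logic can be sketched as a simple fixed-point computation. Every functional form and parameter below is invented for illustration (uniform losses and gains, a fixed shootout probability s); the point is only the qualitative pattern: more guns, less crime, and a new shootout externality.

```python
# Fixed-point sketch of the more-guns-less-crime equilibrium described
# above. All functional forms and numbers are invented for illustration.

s   = 0.3     # probability a crime against an armed victim ends in a shooting
H   = 200.0   # victim's cost of being shot
Hc  = 300.0   # criminal's cost of being shot
k   = 5.0     # nuisance cost of carrying a gun
r0  = 0.2     # crime rate when nobody carries
L_MAX, B_MAX = 100.0, 100.0   # ranges of victim losses / criminal gains

def carry_rate(r):
    # A victim with loss L carries iff r * (L - s*H) > k, L ~ U[0, L_MAX].
    if r <= 0:
        return 0.0
    threshold = s * H + k / r
    return max(0.0, min(1.0, 1 - threshold / L_MAX))

def crime_rate(g):
    # A criminal with gain b commits iff b > g * s * Hc, b ~ U[0, B_MAX].
    return r0 * max(0.0, min(1.0, 1 - g * s * Hc / B_MAX))

g, r = 0.0, r0
for _ in range(200):          # iterate to the fixed point
    g = carry_rate(r)
    r = crime_rate(g)

shootings = r * g * s         # shootings per capita: the new externality
print(f"equilibrium carry rate: {g:.3f}")
print(f"crime rate: {r:.3f}  (was {r0:.3f} with no guns)")
print(f"shootout rate: {shootings:.4f}  (was 0 with no guns)")
```

In this toy equilibrium some people carry, crime falls below its no-guns level, and shootings rise above zero, which is exactly the trade-off that makes the welfare comparison ambiguous.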
And there may not be much effect on crime at all. Whose elasticity with respect to increased probability of being shot is larger, the victim or the criminal? Often the criminal has less to lose. To deter crime the probability of a shooting may have to increase by more than victims are willing to accept and they may choose not to carry guns.
There is also a free-rider problem. I would rather have you carry the gun than me. So deterrence is underprovided.
Finally, you might say that things are different for crimes like mugging versus crimes like random shootings. But really the qualitative effects are the same and the only potential difference is in terms of magnitudes. And it’s not obvious which way it goes. Are random assailants more or less likely to be deterred? As for the victims, on the one hand they have more to gain from carrying a gun when they are potentially faced with a campus shooter, but if they plan to make use of their gun they also face a larger chance of getting shot.
NB: nobody shot at the guy at UT in September and the only person he shot was himself.
My daughter’s 4th grade class is reading a short story by O. Henry called The Two Thanksgiving Day Gentlemen. (A two minute read.) In about an hour I will go to her class and lead a discussion of the story. Here are my notes.
In the story we meet Stuffy Pete. He is sitting on a bench waiting for a second gentleman to arrive. We learn that this is an annual meeting on Thanksgiving day that Stuffy Pete always looks forward to. Stuffy Pete is a ragged, hungry street-dweller and the gentleman who arrives each year treats him to a Thanksgiving feast.
But on this Thanksgiving, Stuffy Pete is stuffed. Because on his way to the meeting, he was stopped by the servant of two old ladies who had their own Thanksgiving tradition. They treated him to an even bigger feast than he is used to. And so he sits there, weighed down on the bench, terrified of the impending arrival.
The old gentleman arrives and recites this speech.
“Good morning, I am glad to see that the vicissitudes of another year have spared you to move in health about the beautiful world. For that blessing alone this day of thanksgiving is well celebrated. If you will come with me, my man, I will provide you with a dinner that should be more than satisfactory in every respect.”
The same speech he has recited every year the two gentlemen met on that same bench. “The words themselves almost formed an institution.”
And Stuffy Pete, in tearful agony at the prospects replies “Thankee sir. I’ll go with ye, and much obliged. I’m very hungry sir.”
Stuffy’s Thanksgiving appetite was not his own; it now belonged to this kindly old gentleman who had taken possession of it.
The story’s deep cynicism, hinted at in the preceding quote, is only fully realized in the final paragraphs which contain the typical O. Henry ironic twist. Stuffy, overstuffed by a second Thanksgiving feast collapses and is brought to hospital by an ambulance whose driver “cursed softly at his weight.” Shortly thereafter he is joined there by the old gentleman and a doctor is overheard chatting about his case
“That nice old gentleman over there, now” he said “you wouldn’t think that was a case of almost starvation. Proud old family, I guess. He told me he hadn’t eaten a thing for three days.”
Social norms and institutions re-direct self-interested motives: individual-level incentives stand in as a proxy for social welfare maximization. But the norms can take on a life of their own, uncoupled from their origin. This is the folk public choice theory of O. Henry’s staggeringly cynical fable.
By asking a hand-picked team of 3 or 4 experts in the field (the “peers”), journals hope to accept the good stuff, filter out the rubbish, and improve the not-quite-good-enough papers.
…Overall, they found a reliability coefficient (r^2) of 0.23, or 0.34 under a different statistical model. This is pretty low, given that 0 is random chance, while a perfect correlation would be 1.0. Using another measure of IRR, Cohen’s kappa, they found a reliability of 0.17. That means that peer reviewers only agreed on 17% more manuscripts than they would by chance alone.
That’s from neuroskeptic writing about an article that studies the peer-review process. I couldn’t tell you what Cohen’s kappa means but let’s just take the results at face value: referees disagree a lot. Is that bad news for peer-review?
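For the record, Cohen’s kappa is simple to compute: it is observed agreement net of the agreement two referees would reach by chance given their individual accept rates. Here is a toy computation with invented counts.

```python
# Cohen's kappa for two referees, from a toy 2x2 agreement table.
# kappa = (observed agreement - chance agreement) / (1 - chance agreement):
# how much better than chance the two raters agree.

# counts[i][j] = number of papers referee 1 rated i and referee 2 rated j,
# with 0 = reject, 1 = accept (made-up numbers).
counts = [[40, 15],
          [20, 25]]

n = sum(sum(row) for row in counts)
p_observed = (counts[0][0] + counts[1][1]) / n

# Chance agreement from each referee's marginal reject/accept rates.
r1_reject = (counts[0][0] + counts[0][1]) / n
r2_reject = (counts[0][0] + counts[1][0]) / n
p_chance = r1_reject * r2_reject + (1 - r1_reject) * (1 - r2_reject)

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"observed agreement: {p_observed:.2f}")
print(f"chance agreement:   {p_chance:.2f}")
print(f"Cohen's kappa:      {kappa:.2f}")
```

So a kappa of 0.17 means the referees agreed only modestly more often than two raters flipping suitably weighted coins would.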
Suppose that you are thinking about whether to go to a movie and you have three friends who have already seen it. You must choose in advance one or two of them to ask for a recommendation. Then after hearing their recommendation you will decide whether to see the movie.
You might decide to ask just one friend. If you do it will certainly be the case that sometimes she says thumbs-up and sometimes she says thumbs-down. But let’s be clear why. I am not assuming that your friends are unpredictable in their opinions. Indeed you may know their tastes very well. What I am saying is rather that, if you decide to ask this friend for her opinion, it must be because you don’t know it already. That is, prior to asking you cannot predict whether or not she will recommend this particular movie. Otherwise, what is the point of asking?
Now you might ask two friends for their opinions. If you do, then it must be the case that the second friend will often disagree with the first friend. Again, I am not assuming that your friends are inherently opposed in their views of movies. They may very well have similar tastes. After all they are both your friends. But, you would not bother soliciting the second opinion if you knew in advance that it was very likely to agree or disagree with the first on this particular movie. Because if you knew that then all you would have to do is ask the first friend and use her answer to infer what the second opinion would have been.
If the two referees you consult are likely to agree one way or the other, you get more information by instead dropping one of them and bringing in your third friend, assuming he is less likely to agree.
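That intuition can be quantified. The sketch below uses an invented correlation model: with probability rho the two friends share a single signal, otherwise they draw independent signals, each correct with probability q. It computes how much information the pair of opinions carries about the movie; the more correlated the friends, the less you learn.

```python
from math import log2

# Mutual information between a movie's quality and a pair of
# recommendations, as a function of the friends' correlation rho.
# Toy model: with probability rho the friends share one signal,
# otherwise independent signals, each correct with probability q.

q = 0.75   # each friend's recommendation is correct with this probability

def pair_information(rho):
    # Joint distribution over (state, opinion1, opinion2):
    # state in {0, 1} with prior 1/2, opinions in {0, 1}.
    joint = {}
    for state in (0, 1):
        for o1 in (0, 1):
            for o2 in (0, 1):
                def p_correct(o):
                    return q if o == state else 1 - q
                shared = p_correct(o1) if o1 == o2 else 0.0
                indep = p_correct(o1) * p_correct(o2)
                joint[(state, o1, o2)] = 0.5 * (rho * shared + (1 - rho) * indep)

    def entropy(dist):
        return -sum(p * log2(p) for p in dist.values() if p > 0)

    # I(state; opinions) = H(opinions) - H(opinions | state).
    opinions = {}
    for (s, o1, o2), p in joint.items():
        opinions[(o1, o2)] = opinions.get((o1, o2), 0.0) + p
    h_opinions = entropy(opinions)
    h_given_state = entropy(joint) - 1.0   # subtract H(state) = 1 bit
    return h_opinions - h_given_state

for rho in [0.0, 0.5, 1.0]:
    print(f"correlation {rho:.1f}: information = {pair_information(rho):.3f} bits")
```

At rho = 1 the second opinion is pure echo and the pair is worth exactly one signal; as rho falls, the pair becomes strictly more informative, which is the editor’s reason to pick referees likely to disagree.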
This is all to say that disagreement is not evidence that peer-review is broken. Exactly the opposite: it is a sign that editors are doing a good job picking referees and thereby making the best use of the peer-review process.
It would be very interesting to formalize this model, derive some testable implications, and bring it to data. Good data are surely easily accessible.
In economic theory, the study of institutions falls under the general heading of mechanism design. An institution is modeled as game in which the relevant parties interact and influence the final outcome. We study how to optimally design institutions by considering how changes in the rules of the game change the way participants interact and bring about better or worse outcomes.
But when the new leaders in Egypt sit down to design a new constitution for the country, standard mechanism design will not be much help. That’s because all of mechanism design theory is premised on the assumption that the planner has in front of him a set of feasible alternatives and he is designing the game in order to improve society’s decision over those alternatives. So it is perfectly well suited for decisions about how much a government should spend this year on all of the projects before it. But to design a constitution is to decide on procedures that will govern decisions over alternatives that become available only in the future, and about which today’s constitutional convention knows nothing.
The American Constitutional Convention implicitly decided how much the United States would invest in nuclear weapons before any of them had any idea that such a thing was possible.
Designing a constitution raises a unique set of incentive problems. A great analogy is deciding on a restaurant with a group of friends. Before you start deliberating you need to know what the options are. Each of you knows about some subset of the restaurants in town and whatever procedure the group will use to ultimately decide affects whether or not you are willing to mention some of the restaurants you know about.
Ideally you would like a procedure which encourages everyone to name all the good restaurants they know about so that the group has as wide a set of choices as possible. But you can’t just indiscriminately reward people for bringing alternatives to the table because that would only lead to a long list of mostly lousy choices.
You can only expect people to suggest good restaurants if they believe that the restaurants they suggest have a chance of being chosen. And now you have to worry about strategic behavior. If I know a good Chinese restaurant but I am not in the mood for Chinese, then how are you going to reward me for bringing it up as an option?
When we think about institutions for public decisions, we have to take into account how they impact this strategic problem. Democracy may not be the best way to decide on a restaurant. If the status quo, say the Japanese restaurant is your second-favorite, you may not suggest the Mexican restaurant for fear that it will split the vote and ultimately lead to the Moroccan restaurant, your least favorite.
Certainly such political incentives affect modern day decision-making. Would a better health-care proposal have materialized were it not for fear of what it would be turned into by the political sausage mill?