Dan Ariely, Chris Anderson, Hal Varian and others are heading Startup-Onomics: a Behavioral Economics “Summit” for entrepreneurs.
What is behavioral economics?
As business owners, we want to design products that are useful, we want customers (lots of them), and we want to create a motivating work environment. But it’s not that easy. In fact, most of the time that stuff takes a lot of hard work and a lot of trial and error.
Good news: there is a science called Behavioral Economics, which attempts to understand people's day-to-day decisions (where do I get my morning coffee?) and people's big decisions (how much should I save for retirement?).
Understanding HOW your users make decisions and WHY they make them is powerful. With this knowledge, companies can build more effective products, governments can create impactful policies and new ideas can gain faster traction.
Sessions include “How to get people to pay what they want,” and “The creation and role of habits in purchasing decisions,” etc.
It’s great, but what’s even greater is that they made a transcript of the whole thing, so what would take you an hour to listen to you can read in about 5 minutes. I read it from start to finish. Featuring appearances by Josh Gans, Valerie Ramey, Betsey Stevenson, Justin Wolfers and others.
The transcript is here.
Linkedin’s IPO followed the familiar pattern. Priced to initial investors at $45/share, the stock soared to a peak of $120/share on the first day of trading and closed at around $90. “Money was left on the table.” Felix Salmon has a good summary of different opinions about this and he briefly discusses auctions as an alternative to the traditional door-to-door IPO. You will recall that Google used an auction when it went public.
Like Mr. Salmon you might think that the winner’s curse would prevent an auction from discovering the right price and ensuring that money is not left on the table. The price-discovery properties of an auction rely on partially informed investors bidding according to their private information and this information thereby being reflected in the price. But a (potentially) winning bidder foresees that his win would mean that most other bidders bid less than he did and that therefore “the market” is more pessimistic about the value of the firm than he is. Anticipating this, he underbids. Since all informed investors do this, the auction price will understate the value of the firm. Money will be left on the table again.
But an IPO is a multi-unit auction. Many shares are up for sale. A new strategic feature arises in these auctions, the loser’s curse. If you are outbid, then you know that many informed investors were more optimistic than you and this will make you regret having bid too low. This strategic force pushes you to raise your bid.
And in fact, as shown by Pesendorfer and Swinkels, for large auctions, under some quite general conditions, a uniform-price sealed bid auction produces a sales price which comes very close to the market’s best estimate of the value of the firm. No money is left on the table.
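To see the information-aggregation logic at work, here is a minimal Monte Carlo sketch. It is not the Pesendorfer-Swinkels equilibrium analysis (their bidders shade strategically); it just assumes bidders naively bid their signals in a uniform-price auction for half as many shares as there are bidders, in which case the clearing price is roughly the median signal and homes in on the true value as the market grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def clearing_price(n_bidders, true_value, noise_sd=1.0, supply_ratio=0.5):
    """Uniform-price sealed-bid auction for k = supply_ratio * n_bidders
    identical shares. Bidders bid signal = value + noise; the price is
    set by the highest losing bid."""
    signals = true_value + noise_sd * rng.normal(size=n_bidders)
    k = int(supply_ratio * n_bidders)
    bids = np.sort(signals)[::-1]
    return bids[k]  # the (k+1)-th highest bid

true_value = 45.0  # think of it as dollars per share
for n in (10, 100, 10_000):
    prices = [clearing_price(n, true_value) for _ in range(500)]
    print(f"n={n:>6}: mean price {np.mean(prices):6.2f}, sd {np.std(prices):.3f}")
```

With symmetric noise and half the bidders winning, the quantile shift that equilibrium shading would otherwise have to correct happens to be zero, so even this naive version shows the price collapsing onto the true value as the market grows; Pesendorfer and Swinkels show the equilibrium result holds far more generally.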
Do you get annoyed when someone boards the elevator with you only to ride up one floor? The stairs are right there, could they not just walk up a single flight? Well, consider this. Someone boards the elevator on the first floor with a 3rd floor destination, but instead of getting off at floor 2 and walking the last flight of stairs, they ride all the way to 3.
Doesn’t seem as annoying, right? So what explains the difference? It can’t be that you are just appalled at their laziness, because riding to floor N rather than getting off at N-1 is just as lazy. It must be the externality.
Getting on the elevator only to ride up a single floor delays everybody else. The decision to ride to the second floor rather than the third isn’t the same because whichever he chooses the elevator is going to have to stop once.
Ah, but what if he gets on and floor 2 is already pushed but 3 is not? Then the tradeoff is the same: if he were to get off at floor 2 and walk he would spare everyone else the additional stop at 3. So you get annoyed at a single-floor rider if and only if you get annoyed at this marginal-floor rider.
Well, not quite. Because there is one more difference. After he makes the sunk decision to get on the elevator, but before he makes the marginal decision, the problem changes. In particular, as he is riding he gains some new information: he can observe how many other people get on the elevator and are going to be affected by his decision.
This puts the marginal-floor rider in a different position than the single-floor rider in terms of social welfare. Because the single-floor rider’s decision whether to board at all is made without knowing how many other riders will be on the elevator. The marginal-floor rider can condition his decision on the number of riders.
Indeed, this means that you may even have cause to forgive Mr. Single-Floor and yet be annoyed at Ms. Marginal-Floor. He may have reasonably expected that few people, if any, were going to be inconvenienced. But if it turns out that the elevator is nearly full then the sum total of their delay due to Mr. SF’s decision to board is a sunk cost, but it’s an avoidable cost for Ms. MF. If she doesn’t get off at 2 and walk an extra flight, you all have plenty of reason to be annoyed.
This is all very important.
Also, this explains the otherwise inexplicable glass elevators, and raises the puzzle of why we don’t see them in office buildings.
Via Eli Dourado, an article in Slate by Ray Fisman on how counterfeit handbags are the gateway to the real thing.
Yet a preliminary study focused on counterfeit sales in China—the source of all those fake handbags in Chinatown and just about everywhere else—suggests that in many cases the sale of fakes may not be so bad for legitimate brands. The study, by Northwestern economist Yi Qian, examined the counterfeit market in the wake of well-publicized cases of food poisoning and exploding gas tanks in China, when enforcement efforts were diverted from policing fashion copycats and toward monitoring drugs, food, and gas. Counterfeit factories flourished, but surprisingly, this led to an increase in sales for high-end products in the years that followed.
Here’s one model. A fake raises your status among everyone who’s fooled. How much is that boost in status worth? Reducing enforcement makes it cheaper to buy a fake and find out. Some will learn that it’s worth more than they thought. But since not everyone is fooled by a fake, they are now willing to pay more for the real thing to increase status on the extensive margin. These people would not have bought Prada without having first experimented with the cheap fake.
There’s a good reason to fear regulation even if you can imagine beneficial regulation. You may be suspicious of government agencies’ motives or competence to implement it. Giving power to an agent when we can’t rely on her to use it in a beneficial way is often a bad idea.
But the same kind of fear, viewed through a mirror, often argues in favor of regulation. Take for example financial regulation. We may understand that if investors, managers, or insurers could be assumed to act reliably in ways that are consistent with their self-interest then markets would work well and there would be no rationale for regulation.
But should we predicate laissez faire on the assumption that they don’t make mistakes, that they perfectly fathom the complex path to their self-interest? If we can’t be sure, then giving them the power to do damage is often a bad idea.
(Drawing: Negative Space from www.f1me.net)
She wrote this convincing essay on happiness and parenting. Parents seem to be less happy but we shouldn’t read too much into that. She brings together all kinds of economic theory and data and along the way she cites a paper I like very much by Luis Rayo and Gary Becker:
Nobel Prize–winning economist Gary Becker, writing with Luis Rayo, has argued this contrary position. In their view, while “happiness and life satisfaction may be related to utility, they are no more measures of utility than are other dimensions of well-being, such as health or consumption of material goods.”[5] Or having kids. Children may make you less happy, but still raise your utility. Devout neoclassical reasoning leads Becker and Rayo to infer from the fact that we are having kids that they raise our utility (or at least the utility of those who make this choice).
Rayo and Becker argued that happiness should be thought of as the carrot that gets us to make good decisions. But happiness is a scarce resource. There’s a limit to how happy you can be. So it has to be used in the most economical way. In their theory the most economical way to use happiness is to give an immediate, and completely transitory boost of happiness to reward good outcomes. You have sex, you get rewarded. It results in conception, that’s another reward. But then you are back to the baseline so as to maximize the range available for further rewards (and penalties) motivating behavior going forward. Bygones are bygones.
With that theory it makes no sense to look at a cross section of the population, compare how happy the people who did X are relative to the people who didn’t do X, and conclude on the basis of that whether it’s good to do X.
And by the way, if there is anything we can expect evolutionary incentives to have a good handle on, it’s whether or not to have kids. That’s the whole ballgame. If happiness is there to motivate us to succeed evolutionarily then you better have a good argument why Nature got it wrong. One place to look might be the quantity/quality tradeoff. Perhaps the relative price of quality versus quantity has declined in modern times and Nature’s mechanism is tuned to an obsolete tradeoff. If so, then people feel a motivation to have more kids than they should. The prescription then would be to resist the temptation you feel to have another kid and instead invest more in the ones you have. Unless you want to be happy.
Andrew Caplin told us about a new experiment that adds to the debate about “nudges.”
We have initiated experiments to study this tradeoff experimentally in a setting where imperfect perception seems highly likely and choice quality is easy to measure. In each round, subjects are presented with three options, each of which is composed of 20 numbers. The value of each option is the sum of the 20 numbers, and subjects are incentivized to select the object with the highest value. In the baseline treatment (“33%, 33%, 33%”), subjects were informed that all three options were equally likely to be the highest valued option, but in two other treatments, they were nudged towards the first option. In one of the nudge treatments (“40%, 30%, 30%”), subjects were informed that the first option was 40% likely to be the highest valued option (the other two were both 30% likely). In the other nudge treatment (“45%, 27.5%, 27.5%”), subjects were told that the first option was 45% likely to be the highest valued option (the other two were both 27.5% likely). Subjects completed 12 rounds of each treatment, which were presented in a random order.
The subjects got the best option only 54% of the time revealing that effort was required to add up all 20 numbers three times to find the largest sum. The nudges gave them hints but notice that the hints also lower the return to search effort. So in theory there will be both income and substitution effects. And in the experiment you see evidence of both. Their choices reveal that they utilized the hints: they more often chose the highlighted alternative. But, the interesting finding is that their chances of getting the best alternative did not increase. In essence, the hint perfectly crowded out their own search effort.
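Here is a stylized simulation of that crowding-out account. This is my own toy model of the subjects (not the authors' design or estimates): each subject either exerts effort, which always finds the best option, or skips the addition and just takes the hinted option. Calibrating effort so accuracy matches the observed 54%, a nudge that cuts effort enough leaves accuracy unchanged while the hinted option gets chosen far more often.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(prior_first, effort_prob, follow_hint, n=200_000):
    """Toy chooser: with prob effort_prob she sums the numbers and finds the
    truly best of the three options; otherwise she picks the hinted first
    option (if the hint is informative) or guesses uniformly at random."""
    probs = [prior_first, (1 - prior_first) / 2, (1 - prior_first) / 2]
    best = rng.choice(3, size=n, p=probs)
    effort = rng.random(n) < effort_prob
    lazy = np.zeros(n, dtype=int) if follow_hint else rng.integers(0, 3, size=n)
    choice = np.where(effort, best, lazy)
    return (choice == best).mean(), (choice == 0).mean()

# Baseline 33/33/33: effort ~0.31 matches the observed 54% accuracy.
print(simulate(1/3, 0.31, follow_hint=False))   # ~ (0.54, 0.33)
# Nudge 45/27.5/27.5: if effort falls to ~0.16, accuracy stays ~54%
# but option 1 is chosen far more often -- the hint crowds out search.
print(simulate(0.45, 0.16, follow_hint=True))   # ~ (0.54, 0.91)
```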
You could take a pessimistic view based on this: nudges don’t improve outcomes, they just make people lazier. But in fact the experiment suggests a more nuanced interpretation of nudges. Even if we don’t see any evidence that, say, published calorie counts improve the quality of decisions, that doesn’t imply that they have no welfare effects. Information is a fungible resource. If you give people information, they can save the effort of gathering it themselves. Given that information is a public good, these are potentially large welfare gains that would be hard to measure directly.
That was the title of a very interesting talk at the Biology and Economics conference I attended over the weekend at USC. The authors are Juan Carillo, Isabelle Brocas and Ricardo Alonso. It’s basically a model of how multitasking is accomplished when different modules in the brain are responsible for specialized tasks and those modules require scarce resources like oxygen in order to do their job. (I cannot find a copy of the paper online.)
The brain is modeled as a kludgy organization. Imagine that the listening-to-your-wife division and the watching-the-French-Open division of YourBrainINC operate independently of one another and care about nothing but completing their individual tasks. What happens when both tasks are presented at the same time? In the model there is a central administrator in charge of deciding how to ration energy between the two divisions. What makes this non-trivial is that only the individual divisions know how much juice they are going to need based on the level of difficulty of this particular instance of the task.
Here’s the key perspective of the model. It is assumed that the divisions are greedy: they want all the resources they need to accomplish their task and only the central administrator internalizes the tradeoffs across the two tasks. This friction imposes limits on efficient resource allocation. And these limits can be understood via a mechanism design problem which is novel in that there are no monetary transfers available. (If only the brain had currency.)
The optimal scheme has a quota structure which has some rigidity. There is a cap on the amount of resources a given division can utilize and that cap is determined solely by the needs of the other division. (This is a familiar theme from economic incentive mechanisms.) An implication is that there is too little flexibility in re-allocating resources to difficult tasks. Holding fixed the difficulty of task A, as the difficulty of task B increases, eventually the cap binds. The easy task is still accomplished perfectly but errors start to creep in on the difficult task.
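As a concrete toy version of that quota structure (my own stylized encoding of the cap idea, ignoring the incentive-compatibility analysis that motivates it in the paper): each division reports how much energy its task needs, and each division's grant is capped at whatever the budget leaves after the other division is served up to a guaranteed share.

```python
def allocate(need_a, need_b, budget=10.0, guaranteed=6.0):
    """Toy quota scheme: division X's cap depends only on the *other*
    division's report -- the budget minus what the other division takes,
    where each division's claim on the other's cap is itself limited
    to `guaranteed` units."""
    cap_a = budget - min(need_b, guaranteed)
    cap_b = budget - min(need_a, guaranteed)
    return min(need_a, cap_a), min(need_b, cap_b)

# Hold task A's difficulty fixed and make task B harder: eventually B's
# cap (set by A's needs alone) binds, and errors creep into the hard task
# while the easy one is still done perfectly.
for need_b in (2, 5, 8, 11):
    print(f"B needs {need_b:>2} -> grants {allocate(3.0, need_b)}")
```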
(Drawing: Our team is non-hierarchical from www.f1me.net)
So why are these the current “market probabilities” for American Idol?
- Lauren Alaina to be eliminated tonight: 50%
- Haley Reinhart to be eliminated tonight: 58%
- Scotty McReery to be eliminated tonight: 15%
The winning percentages also add up to more than 100. Is it not possible to short them all?
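If these are standard binary contracts paying $1 when the event occurs (the usual prediction-market convention), shorting all three does lock in a riskless profit: selling one of each elimination contract collects $0.50 + $0.58 + $0.15 = $1.23, and since exactly one contestant is eliminated tonight, exactly one contract pays out $1, leaving $0.23 per set no matter who goes home.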
Thanks to Zeke for the pointer.
I know of that line of apparel only because I have seen the name stenciled across the shirts and sweaters of its devotees. I infer that they are really nice clothes. Somehow I want to own some.
Which makes me wonder why they are not just giving their clothes away. We get free shirts, they get to drape their brand name across our bodies. Perhaps they would be selective about which bodies, but there must be a market opportunity here. If brand recognition drives sales then the eventual premium they could charge would seem to justify a lot of free hoodies up front. How else can we explain Abercrombie and Fitch, once a middling brand of fishing/hunting wear, now an international purveyor of pre-teen libido?
Normally this kind of rent seeking would be doubly inefficient. Resources wasted in a competition to corner the market, then the inefficient scarcity under the resulting monopoly. But in this case the rent seeking behavior involves giving away the stuff that’s eventually going to be so scarce. Moreover, since we apparently want to wear only the coolest clothes, the eventual monopoly may in fact be the first-best outcome. So we have firms competing to create the surplus maximizing market structure and in the process handing out all the accompanying rents in the form of euro-inscripted jeggings.
Here is a problem that has been in the back of my mind for a long time. What is the second-best dominant-strategy incentive compatible (DSIC) mechanism in a market setting?
For some background, start with the bilateral trade problem of Myerson-Satterthwaite. We know that among all DSIC, budget-balanced mechanisms the most efficient is a fixed-price mechanism. That is, a price is fixed ex ante and the buyer and seller simply announce whether they are willing to trade at that price. Trade occurs if and only if both are willing and if so the buyer pays the fixed price to the seller. This is Hagerty and Rogerson.
Now suppose there are two buyers and two sellers. How would a fixed-price mechanism work? We fix a price p. Buyers announce their values and sellers announce their costs. We first see if there are any trades that can be made at the fixed price p. If both buyers have values above p and both sellers have values below then both units trade at price p. If two buyers have values above p and only one seller has value below p then one unit will be sold: the buyers will compete in a second-price auction and the seller will receive p (there will be a budget surplus here.) Similarly if the sellers are on the long side they will compete to sell with the buyer paying p and again a surplus.
A fixed-price mechanism is no longer optimal. The reason is that we can now use competition among buyers and sellers for “price discovery.” A simple mechanism (but not the optimal one) is a double auction. The buyers play a second-price auction between themselves, the sellers play a second-price reverse auction between themselves. The winners of the two auctions have won the right to trade. They will trade if and only if the second-highest buyer value (which is what the winning buyer will pay) exceeds the second-lowest seller value (which is what the winning seller will receive.) This ensures that there will be no deficit. There might be a surplus, which would have to be burned.
This mechanism is DSIC and never runs a deficit. It is not optimal however because it only sells one unit. But it has the virtue of allowing the “price” to adjust based on “supply and demand.” Still, there is no welfare ranking between this mechanism and a fixed-price mechanism because a fixed-price mechanism will sometimes trade two units (if the price was chosen fortuitously) and sometimes trade no units (if the price turned out too high or low) even though the price discovery mechanism would have traded one.
But here is a mechanism that dominates both. It’s a hybrid of the two. We fix a price p and we interleave the rules of the fixed-price mechanism and the double auction in the following order
- First check if we can clear two trades at price p. If so, do it and we are done.
- If not, then check if we can sell one unit by the double auction rules. If so, do it and we are done.
- Finally, if no trades were executed using the previous two steps then return to the fixed-price and see if we can execute a single trade using it.
I believe this mechanism is DSIC (exercise for the reader, the order of execution is crucial!). It never runs a deficit and it generates more trade than either standalone mechanism: fixed-price or double auction.
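Here is a minimal sketch of those rules for the two-buyer, two-seller case; the encoding of the steps is mine, with the payments as described above, and the DSIC check remains the reader's exercise.

```python
def hybrid_mechanism(p, buyer_values, seller_costs):
    """Fixed-price / double-auction hybrid for two buyers and two sellers.
    Returns (number of trades, buyer payments, seller receipts)."""
    b = sorted(buyer_values, reverse=True)   # b[0] >= b[1]
    s = sorted(seller_costs)                 # s[0] <= s[1]

    # Step 1: clear two trades at the fixed price p if possible.
    if b[1] >= p and s[1] <= p:
        return 2, [p, p], [p, p]

    # Step 2: double auction for one unit. The high buyer and low seller
    # trade iff the second-highest value covers the second-lowest cost;
    # the buyer pays b[1], the seller receives s[1] (any gap is burned).
    if b[1] >= s[1]:
        return 1, [b[1]], [s[1]]

    # Step 3: fall back to a single trade at the fixed price.
    if b[0] >= p and s[0] <= p:
        return 1, [p], [p]

    return 0, [], []

# A case the double auction alone would miss (no trade, since 0.4 < 0.6)
# but the fixed-price fallback rescues:
print(hybrid_mechanism(0.5, [0.9, 0.4], [0.1, 0.6]))   # (1, [0.5], [0.5])
# A case the fixed price alone would miss but the double auction catches:
print(hybrid_mechanism(0.5, [0.45, 0.4], [0.1, 0.2]))  # (1, [0.4], [0.2])
```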
Very interesting research question: is this a second-best mechanism? If not, what is? If so, how do you generalize it to markets with an arbitrary number of buyers and sellers?
A buyer and a seller negotiating a sale price. The buyer has some privately known value and the seller has some privately known cost; with positive probability there are gains from trade but with positive probability the seller’s cost exceeds the buyer’s value. (So this is the Myerson-Satterthwaite setup.)
Do three treatments.
- The experimenter fixes a price in advance and the buyer and seller can only accept or reject that price. Trade occurs if and only if they both accept.
- The seller makes a take-it-or-leave-it offer.
- The parties can freely negotiate and they trade if and only if they agree on a price.
Theoretically there is no clear ranking of these three mechanisms in terms of their efficiency (the total gains from trade realized.) In practice the first mechanism clearly sacrifices some efficiency in return for simplicity and transparency. If the price is set right the first mechanism would outperform the second in terms of efficiency due to a basic market power effect. In principle the third treatment could allow the parties to find the most efficient mechanism, but it would also allow them to negotiate their way to something highly inefficient.
A conjecture would be that with a well-chosen price the first mechanism would be the most efficient in practice. That would be an interesting finding.
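For a sense of the theoretical benchmark, here is a quick Monte Carlo comparison under one standard parameterization (my own choice: values and costs uniform on [0,1], risk-neutral equilibrium play rather than the experimental behavior the conjecture is actually about):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
v = rng.random(n)   # buyer values, uniform on [0,1]
c = rng.random(n)   # seller costs, uniform on [0,1]

first_best = np.where(v > c, v - c, 0.0).mean()

p = 0.5   # treatment 1: the efficiency-maximizing fixed price here
fixed = np.where((v >= p) & (c <= p), v - c, 0.0).mean()

ask = (1 + c) / 2   # treatment 2: a seller with cost c optimally posts (1+c)/2
tioli = np.where(v >= ask, v - c, 0.0).mean()

print(first_best, fixed, tioli)   # ~0.167, ~0.125, ~0.125
```

In this uniform example the well-chosen fixed price and the seller offer come out essentially tied at about three quarters of first best, which is one way of seeing why the ranking is theoretically ambiguous and has to be settled in the lab.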
A variation would be to do something similar but in a public goods setting. We would again compare simple but rigid mechanisms with mechanisms that allow for more strategic behavior. For example, a version of mechanism #1 would be one in which each individual was asked to contribute an equal share of the cost and the project succeeds if and only if all agree to their contributions. Mechanism #3 would allow arbitrary negotiation with the only requirement being that the total contribution exceeds the cost of the project.
In the public goods setting I would conjecture that the opposite force is at work. The scope for additional strategizing (seeding, cajoling, guilt-tripping, etc) would improve efficiency.
Anybody know if anything like these experiments has been done?

It’s a recent development that economists are turning to neuroscience to inform and enrich economic theory. One controversial aspect is the potential use of neuroscience data to draw conclusions about welfare that go beyond traditional revealed preference. It is nicely summarized by this quote from Camerer, Loewenstein, and Prelec.
The foundations of economic theory were constructed assuming that details about the functioning of the brain’s black box would not be known. This pessimism was expressed by William Jevons in 1871:
I hesitate to say that men will ever have the means of measuring directly the feelings of the human heart. It is from the quantitative effects of the feelings that we must estimate their comparative amounts.
Since feelings were meant to predict behavior but could only be assessed from behavior, economists realized that, without direct measurement, feelings were useless intervening constructs. In the 1940s, the concepts of ordinal utility and revealed preference eliminated the superfluous intermediate step of positing immeasurable feelings. Revealed preference theory simply equates unobserved preferences with observed choices…
But now neuroscience has proved Jevons’s pessimistic prediction wrong; the study of the brain and nervous system is beginning to allow direct measurement of thoughts and feelings.
There are skeptics; I don’t count myself as one of them. I expect that we will learn from neuroscience and economics will benefit. But I think it is helpful to explore the boundaries, and I have a little thought experiment that I think sheds some light.
Imagine a neuroscientist emerges from his lab with a theory of what makes people happy. This theory is based on measuring activity in the brain and correlating it with measures of happiness and then repeated experiments studying how different activities affect happiness. For the purposes of this thought experiment be as generous as you wish to the neuroscientist, assume he has gone as far as you think is possible in measuring thoughts and feelings and their causes.
Now the neuroscientist approaches his first new patient and explains to him how to change his behavior in order to achieve the optimum level of well-being according to his theory, and asks the patient to give it a try. After a month of trying it out, imagine that the patient comes back and says “Doctor, I did everything you prescribed to the letter for one whole month. But, with all due respect, I would prefer to just go back to doing what I was doing before.”
Ask yourself if there is any circumstance, including any imaginable level of neuroscientific sophistication, under which, after the patient tries and rejects the neuroscientist’s theory, you would accept a policy which overrode the patient’s wishes and imposed upon him the lifestyle that the neuroscientist says is good for him.
If there is no circumstance then I claim you are fundamentally a revealed preference adherent. Because the example (again, I am asking you to be as charitable as you can be to the neuroscientist) presents the strongest possible case for including non-choice data into welfare considerations. We are allowing the patient to experience what the neuroscientist’s theory asserts to be his greatest possible state of well-being and even after experiencing that he is choosing not to experience it any more. If you insist that he has that freedom then you are deferring to his revealed preference over his “true” welfare.
That’s not to say that you must reject neuroscience as being valuable for welfare. Indeed it may be that when the patient goes his own way he does voluntarily incorporate some of what he learned. And so, even by a revealed preference standard, we could say that neuroscience has made him better off. But we can clearly bound its contribution. Neuroscience can make you better off only insofar as it can provide you with new information that you are free to use or reject as you prefer.
Drawing: Anxiety or Imagination from www.f1me.net
Kobe Bryant was recently fined $100,000 for making a homophobic comment to a referee. Ryan O’Hanlon writing for The Good Men Project blog puts it into perspective:
- It’s half as bad as conducting improper pre-draft workouts.
- It’s twice as bad as saying you want to leave the NBA and go home.
- It’s just as bad as talking about the collective bargaining agreement.
- It’s twice as bad as saying one of your players used to smoke too much weed.
- It’s just as bad as writing a letter in Comic Sans about a former player.
- It’s just as bad as saying you want to sign the best player in the NBA.
- It’s four times as bad as throwing a towel to distract a guy when he’s shooting free throws.
- It’s four times as bad as kicking a water bottle.
- It’s 10 times as bad as standing in front of your bench for an extended period of time.
- It’s 10 times as bad as pretending to be shot by a guy who once brought a gun into a locker room.
- It’s 13.33 times as bad as tweeting during a game.
- It’s five times as bad as throwing a ball into the stands.
- It’s four times as bad as throwing a towel into the stands.
- It’s twice as bad as lying about smelling like weed and having women in a hotel room during the rookie orientation program.
- It’s one-fifth as bad as snowboarding.
That’s based on a comparison of the fines that the various misdeeds earned. The “n times as bad” is the natural interpretation of the fines since we are used to thinking of penalties as being chosen to fit the crime. But NBA justice needn’t conform to our usual intuitions because this is an employer/employee relationship governed by actual contract, not just social contract. We could try to think of these fines as part of the solution to a moral hazard problem. Independent of how “bad” the behaviors are, there are some that the NBA wants to discourage and fines are chosen in order to get the incentives right.
But that’s a problematic interpretation too. From the moral hazard perspective the optimal fine for many of these would be infinite. Any finite fine is essentially a license to behave badly as long as the player has a strong enough desire to do so. Strong enough to outweigh the cost of the fine. You can’t throw a towel to distract a guy when he’s shooting free throws unless it’s so important to you that you are willing to pay $25,000 for the privilege.
You can rescue moral hazard as an explanation in some cases because if there is imperfect monitoring then the optimal fine will have to be finite: with imperfect monitoring the fine cannot be a perfect deterrent. For example it may not be possible to detect with certainty that you were lying about smelling like weed and having women in a hotel room during the rookie orientation program. If so then the false positives will have to be penalized. And when the fine will be paid with positive probability even with players on their best behavior, you are now trading off incentives vs. risk exposure.
But the imperfect monitoring story can’t explain why Comic Sans doesn’t get an infinite fine, purifying the game of that transgression once and for all. Or tweeting, or snowboarding or most of the others as well.

It could be that the NBA knows that egregious fines can be contested in court or trigger some other labor dispute. This would effectively put a cap on fines at just the level where it is not worth the player’s time and effort to dispute it. But that doesn’t explain why the fines are not all pegged at that cap. It could be that the likelihood that a fine of a given magnitude survives such a challenge depends on the public perception of the crime. That could explain some of the differences but not many. Why is the fine for saying you want to leave the NBA larger than the fine for throwing a ball into the stands?
Once we’ve dispensed with those theories it just might be that the NBA recognizes that players simply want to behave badly sometimes. Without that outlet something else is going to give. Poor performance perhaps or just an eventual Dennis Rodman. The NBA understands that a fine is a price. And with the players having so many ways of acting out to choose from, the NBA can use relative prices to steer them to the efficient frontier. Instead of kicking a water bottle, why not get your frustrations out by sending 3 1/2 tweets during the game? Instead of saying that one of your players smokes too much weed, go ahead and indulge your urge to stand out in front of the bench for an extended period of time. You can do it for 5 times as long as the last guy or even stand 5 times farther out.
Not surprisingly, all of these choices start to look like real bargains compared to snowboarding and improper pre-draft workouts.
There is a study by some economists and statisticians on the correlation between the price of a wine and ratings in blind tastings by tasters who are not informed of the price. The headline result in the paper is that higher priced wines don’t get higher ratings. If anything they get lower ratings. It is typically cited in the first paragraph of blog posts to set up various theories about how people use price information to tell themselves what they should and shouldn’t like. (For example, here’s Jonah Lehrer.)
But why should we expect higher priced wine to get higher ratings in tastings? Suppose there are 100 different styles of wine and for every different style there is a group that likes that style and only that style. There will be a lot of variation in the price of different styles because the price will depend on the supply of that style and the size of the group that likes that style. Now ask a person to taste a randomly selected wine and rate it. There will be no correlation between price and ratings.
There are many styles of cheese with different prices. Would we expect the price of cheese to predict ratings in blind tastings?
Here’s another variation on the same idea. Suppose there are just two styles of wine, subtle and not-so-subtle. Some people appreciate the subtlety but most don’t. Suppose that the supply of subtle wine is lower so that its price is higher. Then again a study like this will produce an overall negative correlation between price and ratings.
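Here is a minimal simulation of that two-style story, with parameters of my own invention: the subtle style is pricier, only a minority appreciates it, and random tasters honestly rate randomly assigned wines.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
subtle = rng.random(n) < 0.5           # which style each taster is poured
expert = rng.random(n) < 0.2           # 20% of tasters appreciate subtlety
price = np.where(subtle, 50.0, 15.0)   # the scarcer subtle wine costs more

# Experts love the subtle wine; everyone else mildly prefers the
# not-so-subtle style. Noise keeps ratings from being deterministic.
mean_rating = np.where(subtle, np.where(expert, 95.0, 80.0), 85.0)
rating = mean_rating + rng.normal(0.0, 5.0, size=n)

print("all tasters :", round(np.corrcoef(price, rating)[0, 1], 2))  # negative
print("experts only:", round(np.corrcoef(price[expert], rating[expert])[0, 1], 2))  # positive
```

Every taster reports honestly, yet the pooled price-rating correlation comes out negative while the expert-only correlation is positive.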
And indeed if you read past page 3 of the paper you see that an effect like this is in the data.
Our data also indicates that experts, unlike non-experts, on average assign as high – or higher – ratings to more expensive wines. The coefficient on the expert*price interaction term is positive and highly statistically significant. The price coefficient for non-experts is negative, and about the same size as in the baseline model. The net coefficient on price for experts is the sum of these two coefficients. It is positive and marginally statistically significant.
The linear estimator offers an interpretation of these effects. In terms of a 100 point scale (such as that used by Wine Spectator), the extended model predicts that for a wine that costs ten times more than another wine, non-experts will on average assign an overall rating that is about four points lower, whereas experts will assign an overall rating that is about seven points higher.
Seth Godin writes:
When two sides are negotiating over something that spoils forever if it doesn’t get shipped, there’s a straightforward way to increase the value of a settlement. Think of it as the net present value of a stream of football…
Any Sunday the NFL doesn’t play, the money is gone forever. You can’t make up for it later by selling more football–that money is gone. The owners don’t get it, the players don’t get it, the networks don’t get it, no one gets it.
The solution: While the lockout/strike/dispute is going on, keep playing. And put all the profit/pay in an escrow account. Week after week, the billions and billions of dollars pile up. The owners see it, the players see it, no one gets it until there’s a deal.
There are two questions you have to ask if you are going to evaluate this idea. First, what would happen if you change the rules in this way? Second, would the parties actually agree to it?
Bargaining theory is one of the most unsettled areas of game theory, but there is one very general and very robust principle. What drives the parties to agreement is the threat of burning surplus. Any time a settlement proposal is on the table it comes with the following interpretation: “if you don’t agree to this now you better expect to be able to negotiate for a significantly larger share in the next round, because between now and then a big chunk of the pie is going to disappear.” Moreover it is only through the willingness to let the pie shrink that either party can prove that he is prepared to make big sacrifices in order to get that larger share.
So while the escrow idea ensures that there will be plenty of surplus once they reach agreement, it has the paradoxical effect of making agreement even more difficult to reach. In the extreme it makes the timing of the agreement completely irrelevant. What’s the point of even negotiating today when we can just wait until tomorrow?
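A textbook illustration of why (a standard Rubinstein-style example, not anything specific to the NFL case): in alternating-offer bargaining over a pie of size 1 that shrinks by a discount factor $\delta < 1$ each round, the unique subgame-perfect outcome is immediate agreement, with the proposer taking $1/(1+\delta)$; the surplus destroyed by delay is exactly what pins this down. Set $\delta = 1$, which is what a perfect escrow amounts to, and the uniqueness evaporates: with nothing lost by waiting, essentially any split at any date can be supported in equilibrium.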
But of course who cares when and even whether they eventually agree? All we really want is to see football right? And even if they never agree how to split the mounting surplus, this protocol keeps the players on the field. True, but that’s why we have to ask whether the parties would actually accept this bargaining game. After all if we just wanted to force the players to play we wouldn’t have to get all cute with the rules of negotiation, we could just have an act of Congress.
And now we see why proposals like this can never really help: they just push the bargaining problem one step earlier, changing the terms of the negotiation without affecting the underlying incentives. As of today each party is looking ahead expecting some eventual payoff and some total surplus wasted. Godin’s rules of negotiation would mean that no surplus is wasted, so each party could expect an even higher eventual payoff. But if it were possible to get the two parties to agree to that, then for exactly the same reason the old-fashioned bargaining process would produce a proposal for immediate agreement with the same division of the spoils, on the table today and inked tomorrow.
Still it is interesting from a theoretical point of view. It would make for a great game theory problem set to consider how different rules for dividing the accumulated profits would change the bargaining strategies. The mantra would be “Ricardian Equivalence.”
There was all this discussion about Steven Landsburg’s taxation example.
Nothing makes my job easier than a journalist who writes about something interesting and gets it 100% wrong.
Thanks, then, to Elizabeth Lesly Stevens for her column in yesterday’s Bay Citizen. Stevens wants to tax the “idle rich”, her Exhibit A being Robert Kendrick, heir to the $84 million Schlage Lock Company fortune. According to Ms. Stevens, Mr. Kendrick appears to do pretty much nothing but park and re-park his four cars all day long. Taxing people like Mr. Kendrick, she says, has to be part of any solution to America’s fiscal crisis.
Here’s what Ms. Stevens misses: Assuming the facts are as she states them, it is quite literally impossible to raise revenue by taxing the likes of Mr. Kendrick. We could argue about whether it’s desirable, but because it’s impossible, the discussion is moot.
The point being that once we look at the real economy, i.e. the allocation of goods and services and how that would be altered by taxing Mr. Kendrick, we see that since he is consuming nothing, any increase in consumption by the government must be taking resources away from somebody else.
If that doesn’t persuade you then consider this. Suppose Kendrick puts all of his assets into a pile of cash and burns it. There is no effect on anybody’s consumption. (Assume he gets no consumption value from the bonfire.) If at the same time the government prints an equal number of dollars and spends it, consumption allocations have been altered but not Mr. Kendrick’s. Whatever goods and services the government consumes must come from somebody other than him. Now observe that there is no difference at all between the scenario in which the money is burned by one party and printed by another and the scenario in which it is handed over directly through a tax.
Professor Landsburg makes a contribution by presenting these examples which force us to think carefully about concepts we normally take for granted. Indeed he is even willing to adopt the persona of a smug provocateur to get his point across, and we owe him our thanks for that sacrifice in service of the greater good.
On the other hand we should recognize that this exercise is really beside the point. The government certainly can raise revenue by taking Mr. Kendrick’s assets. The fact is that the dollar value of his assets is a claim on goods and services that will eventually be exercised by whoever inherits the assets. Taxing his assets today means taking those claims away from them. Moreover, the real allocation of resources will be altered in a way that is right in line with the spirit of the original columnist’s motivation. The government will consume more today, others will save more today. Those savers will consume more in the future and Mr. Kendrick’s windfall heirs will consume less.
Tyler raises the cash-grants versus Medicare question:
Nonetheless I propose a more modest version of the idea. When people turn a certain age, allow them to trade in the current benefits package for a minimalistic package (set broken limbs and offer lots of potent painkillers), plus some of the rest in cash, doled out over the years if need be. For some people, medical tourism will fill the gap.
Even if you believe that cash grants are a more efficient way to achieve whatever end Medicare serves, you should still be opposed to this idea. Because Medicare will never go away for good. You can “replace” it with cash grants but eventually people will notice again that old people want health care subsidies and then you will have both cash grants and Medicare.
Indeed, we already had cash grants in the form of Social Security when Medicare was introduced as a supplement to Social Security in 1965.
Your home is underwater, but you can’t use that to keep your lawn green, and the homeowners’ association is threatening to sue. What do you do? Paint it.
The grass spraying business took off here as the housing crisis escalated and real estate brokers were looking to quickly increase the curb appeal of abandoned properties on the cheap. A lawn painting, using a vegetable-based dye, can cost about $200. Vigorous homeowners’ associations, which can fine owners thousands of dollars if a dispute drags on, have also been good for business, said Klaus Lehmann of Turf-Painters Enterprise.
This is the third and final post on ticket pricing motivated by the new restaurant Next in Chicago and proprietors Grant Achatz and Nick Kokonas’s new ticket policy. In the previous two installments I tried to use standard mechanism design theory to see what comes out when you feed in some non-standard pricing motives having to do with enhancing “consumer value.” The two attempts that most naturally come to mind yielded insights but not a useful pricing system. Today the third time is the charm.
Things start to fall into place when we pay close attention to this part of Nick’s comment to us:
we never want to invert the value proposition so that customers are paying a premium that is disproportionate to the amount of food / quality of service they receive.
I propose to formalize this as follows. From the restaurant’s point of view, consumer surplus is valuable but some consumers are prepared to bid even more than the true value of the service they will get. The restaurant doesn’t count these skyscraping bids as actually reflecting consumer surplus and they don’t want to tailor their mechanism to cater to them. In particular, the restaurant distinguishes willingness to pay from “value.”
I can think of a number of sensible reasons they would take this view. They might know that many patrons overestimate the value of a seating at Next. Indeed the restaurant might worry that high prices by themselves artificially inflate willingness to pay. They don’t want a bubble. And they worry about their reputation if someone pays $1700 for a ticket, gets only $1000 worth of value and publicly gripes. Finally they might just honestly believe that willingness to pay is a poor measure of welfare especially when comparing high versus low.
Whatever the reason, let’s run with it. Let’s define $v(w)$ to be the value, as the restaurant perceives it, that would be realized by service to a patron whose willingness to pay is $w$. One natural example would be $v(w) = \min\{w, V\}$, where $V$ is some prespecified “cap.” It would be like saying that nobody, no matter how much they say they are willing to pay, really gets a value larger than, say, $1000 from eating at Next.
Now let’s consider the optimal pricing mechanism for a restaurant that maximizes a weighted sum of profit and consumer’s surplus, where now consumer’s surplus is measured as the difference between $v(w)$ and whatever price is paid. The weight on profit is $\alpha$ and the weight on consumer surplus is $1-\alpha$. After you integrate by parts you now get the following formula for virtual surplus:

$(1-\alpha)\, v(w) + (2\alpha - 1)\left( w - \frac{1-F(w)}{f(w)} \right)$

(here $F$ is the distribution of willingness to pay and $f$ its density, as in the previous posts.)

And now we have something! Because if $\alpha$ is between $0$ and $1/2$ then the first term is increasing in $w$ (up to the cap $V$) and the second term is decreasing. For $\alpha$ close enough to $1/2$, the overall virtual surplus is going to be first increasing and then decreasing. And that means that the optimal mechanism is something new. When bids are in the low to moderate range, you use an auction to decide who gets served. But above some level, high bidders don’t get any further advantage and they are all lumped together.
The optimal mechanism is a hybrid between an auction and a lottery. It has no reserve price (over and above the cost of service) so there are never empty seats. It earns profits but eschews exorbitant prices.
It has clear advantages over a fixed price. A fixed price is a blunt instrument that has to serve two conflicting purposes. It has to be high enough to earn sufficient revenue on dates when demand is high enough to support it, but it can’t be too high that it leads to empty seats on dates when demand is lower. An auction with rationing at the top is flexible enough to deal with both tasks independently. When demand is high the fixed price (and rationing) is in effect. When demand is low the auction takes care of adjusting the price downward to keep the restaurant full. The revenue-enhancing effects of low prices is an under-appreciated benefit of an auction. Finally, it’s an efficient allocation system for the middle range of prices so scalping motivations are reduced compared to a fixed price.
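Here is one way such a hybrid could be run in practice. This is a sketch under my own simplifying assumptions (a fixed number of identical seats, a single pooling threshold, uniform pricing); the actual thresholds would come from the virtual surplus formula above.

```python
import random

def hybrid_allocation(bids, seats, cost, cap, seed=0):
    """Auction below the cap, lottery at the top. Bidders at or above `cap`
    are pooled and rationed by lottery at price `cap`; otherwise the seats
    go to the highest bids at a uniform price set by the highest losing
    bid (never below cost, and never above the cap)."""
    rng = random.Random(seed)
    eligible = sorted((b for b in bids if b >= cost), reverse=True)
    top = [b for b in eligible if b >= cap]
    if len(top) > seats:                     # high-demand night: ration
        return rng.sample(top, seats), cap
    winners = eligible[:seats]               # low-demand night: auction
    losing = eligible[seats:seats + 1]
    return winners, max([cost] + losing)     # price falls to fill the room

# Six bids above a 500 cap for three seats: lottery at 500.
print(hybrid_allocation([900, 850, 800, 700, 650, 600], 3, cost=100, cap=500))
# Thin demand: the price drops to 120 and the room still fills.
print(hybrid_allocation([400, 250, 180, 120], 3, cost=100, cap=500))
```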
Incentives for scalping are not eliminated altogether because of the rationing at the top. This can be dealt with by controlling the resale market. Indeed here is one clear message that comes out of all of this. Whatever motivation the restaurant has for rationing sales, it is never optimal to allow unfettered resale of tickets. That only undermines what you were trying to achieve. Now Grant Achatz and Nick Kokonas understand that but they are forced to condone the Craigslist market because by law non-refundable tickets must be freely transferable.
But the cure is worse than the disease. In fact refundable tickets are your friend. The reason someone wants to return their ticket for a refund is that their willingness to pay has dropped below the price. But there is somebody else with a willingness to pay that is above the price. We know this for sure because tickets are being rationed at that price. Granting the refund allows the restaurant to immediately re-sell it to the next guy waiting in line. Indeed, a hosted resale market would enable the restaurant to ensure that such transactions take place instantaneously through an automated system according to the same terms under which tickets were originally sold.
Someone ought to try this.
A meditation on tipping in Australia versus the United States.
And Manhattan is really cool these days. Especially with the Aussie kicking seven kinds of Chinese tripe out of the greenback. But that rest room, it was a marvel. If it was a person I’d say it’d been scrubbed until its bellybutton shined. The mountain of crisp, white, freshly laundered hand towels never got any smaller despite the constant stream of punters using and discarding them. The wash basin, gleaming and shining, fairly groaned under the weight of the vast selection of cleansing gels, moisturizers, and other masculine hygiene products with which I must profess myself completely unfamiliar. Not one stray, errant drop marked the floor of this restroom. Nary a single pubic hair had escaped to run wild on the immaculate tiling. And it was all thanks to the dude from Senegal who was doing it for minimum wage and tips.
It seems that the toilets are not so clean in Oz. And tipping, evidently an American import, hasn’t exactly captured the imagination down under. The comments following the article are especially entertaining.
Restaurants, touring musicians, and sports franchises are not out to gouge every last penny out of their patrons. They want patrons to enjoy their craft but also to come away feeling like they didn’t pay an arm and a leg. Yesterday I tried to formalize this motivation as maximizing consumer surplus but that didn’t give a useful answer. Maximizing consumer surplus means either complete rationing (and zero profit) or going all the way to an auction (a more general argument why appears below.) So today I will try something different.
Presumably the restaurant cares about profits too. So it makes sense to study the mechanism that maximizes a weighted sum of profits and consumer’s surplus. We can do that. Standard optimal mechanism design proceeds by a sequence of mathematical tricks to derive a measure of a consumer’s value called virtual surplus. Virtual surplus allows you to treat any selling mechanism you can imagine as if it worked like this
- Consumers submit “bids”
- Based on the bids received the seller computes the virtual surplus of each consumer.
- The consumer with the highest virtual surplus is served.
If you write down the optimal mechanism design problem where the seller puts weight $\alpha$ on profits and weight $1-\alpha$ on consumer surplus, and you do all the integration by parts, you get this formula for virtual surplus:

$\alpha v - (2\alpha - 1)\frac{1-F(v)}{f(v)}$

where $v$ is the consumer’s willingness to pay, $F(v)$ is the proportion of consumers with willingness to pay less than $v$, and $f(v)$ is the corresponding probability density function. That last ratio is called the (inverse) hazard rate.
As usual, just staring down this formula tells you just about everything you want to know about how to design the pricing system. One very important thing to know is what to do when virtual surplus is a decreasing function of $v$. If we have a decreasing virtual surplus then we learn that it’s at least as important to serve the low valuation buyers as those with high valuations (see point 3 above.)

But here’s a key observation: it’s impossible to sell to low valuation buyers and not also to high valuation buyers because whatever price the former will agree to pay the latter will pay too. So a decreasing virtual surplus means that you do the next best thing: you treat high and low value types the same. This is how rationing becomes part of an optimal mechanism.
For example, suppose the weight $\alpha$ on profit is equal to $0$. That brings us back to yesterday’s problem of just maximizing consumer surplus. And our formula now tells us why complete rationing is optimal: it tells us that virtual surplus is just equal to the inverse hazard rate $\frac{1-F(v)}{f(v)}$, which is typically monotonically decreasing. Intuitively, here’s what the virtual surplus is telling us when we are trying to maximize consumer surplus. If we are faced with two bidders and one has a higher valuation than the other, then to try to discriminate would require that we set a price in between the two. That’s too costly for us because it would cut into the consumer surplus of the eventual winner.
So that’s how we get the answer I discussed yesterday. Before going on I would like to elaborate on yesterday’s post based on correspondence I had with a few commenters, especially David Miller and Kane Sweeney. Their comments highlight two assumptions that are used to get the rationing conclusion: monotone hazard rate, and no payments to non-buyers. It gets a little more technical than usual so I am going to put it here in an addendum to yesterday (scroll down for the addendum.)
Now back to the general case we are looking at today, we can consider other values of $\alpha$. An important benchmark case is $\alpha = 1/2$, when virtual surplus reduces to just $v/2$, now monotonically increasing. That says that a seller who puts equal weight on profits and consumer surplus will always allocate to the highest bidder because his virtual surplus is higher. An auction does the job, in fact a second price auction is optimal. The seller is implementing the efficient outcome.

More interesting is when $\alpha$ is between $0$ and $1/2$. In general then the shape of the virtual surplus will depend on the distribution $F$, but the general tendency will be toward either complete rationing or an efficient auction. To illustrate, suppose that willingness to pay is distributed uniformly from $0$ to $1$. Then virtual surplus reduces to

$(1 - 2\alpha) + (3\alpha - 1)v$

which is either decreasing over the whole range of $v$ (when $\alpha < 1/3$), implying complete rationing, or increasing over the whole range (when $\alpha > 1/3$), prescribing an auction.
Finally, when $\alpha > 1/2$, virtual surplus is the difference between an increasing function and a decreasing function and so it is increasing over the whole range, and this means that an auction is optimal (now typically with a reserve price above cost, so that in return for higher profits the restaurant lives with empty tables and inefficiency. This is not something any restaurant would choose if it can at all avoid it.)
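The case analysis is easy to check numerically for the uniform example (a quick sanity check of the formula above, nothing more):

```python
import numpy as np

def virtual_surplus(v, alpha):
    """psi(v) = alpha*v - (2*alpha - 1)*(1 - F(v))/f(v); for v ~ U[0,1]
    the inverse hazard rate (1-F)/f equals 1 - v, so psi reduces to
    (1 - 2*alpha) + (3*alpha - 1)*v."""
    return alpha * v - (2 * alpha - 1) * (1 - v)

v = np.linspace(0, 1, 5)
for alpha in (0.0, 0.25, 1/3, 0.5, 0.75):
    print(f"alpha={alpha:.2f}:", np.round(virtual_surplus(v, alpha), 2))
# The slope is 3*alpha - 1: decreasing below alpha = 1/3 (complete
# rationing), increasing above it (an auction); and for alpha > 1/2,
# psi(0) < 0, which is the reserve price excluding the lowest types.
```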
What do we conclude from this? Maximizing a weighted sum of consumer surplus and profit again yields one of two possible mechanisms: complete rationing or an auction. Neither of these mechanisms seems to fit what Nick Kokonas was looking for in his comment to us, and so we have to go back to the drawing board again.
Tomorrow I will take a closer look and extract a more refined version of Nick’s objective that will in fact produce a new kind of mechanism that may just fit the bill.
Addendum: Check out these related papers by Bulow and Klemperer (dcd: glen weyl) and by Daniele Condorelli.
Last week, in response to our proposal for how to run a ticket market, Nick Kokonas of Next Restaurant wrote something interesting.
Simply, we never want to invert the value proposition so that customers are paying a premium that is disproportionate to the amount of food / quality of service they receive. Right now we have it as a great bargain for those who can buy tickets. Ideally, we keep it a great value and stay full.
Economists are not used to that kind of thinking and certainly not accustomed to putting such objectives into our models, but we should. Many sellers share Nick’s view and the economist’s job is to show the best way to achieve a principal’s objective, whatever it may be. We certainly have the tools to do it.
Here’s an interesting observation to start with. Suppose that we interpret Nick as wanting to maximize consumer surplus. What pricing mechanism does that? A fixed price has the advantage of giving high consumer surplus when willingness to pay is high. The key disadvantage is rationing: a fixed price has no way of ensuring that the guy with a high value and therefore high consumer surplus gets served ahead of a guy with a low value.
By contrast an auction always serves the guy with the highest value and that translates to higher consumer surplus at any given price. But the competition of an auction will lead to higher prices. So which effect dominates?
Here’s a little example. Suppose you have two bidders and each has a willingness to pay that is distributed according to the uniform distribution on the interval $[0,1]$. Let’s net out the cost of service and hence take that to be zero.

If you use a rationing system, each bidder has a 50-50 chance of winning and paying nothing (i.e. paying the cost of service.) So a bidder whose value for service is $v$ will have expected consumer surplus equal to $v/2$.

If instead you use an auction, what happens? First, the highest bidder will win so that a bidder with value $v$ wins with probability $v$. (That’s just the probability that his opponent had a lower value.) For bidders with high values that is going to be higher than the 50-50 probability from the rationing system. That’s the benefit of an auction.

However he is going to have to pay for it and his expected payment is $v/2$. (The simplest way to see this is to consider a second-price auction where he pays his opponent’s bid. His opponent has a dominant strategy to bid his value, and with the uniform distribution that value will be $v/2$ on average conditional on being below $v$.) So his consumer surplus is only $v^2/2$, because when he wins his surplus is his value minus his expected payment, $v - v/2 = v/2$, and he wins with probability $v$.
So in this example we see that, from the point of view of consumer surplus, the benefits of the efficiency of an auction are more than offset by the cost of higher prices. But this is just one example and an auction is just one of many ways we could think of improving upon rationing.
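Those two formulas are easy to confirm by simulation (a sketch: two bidders with uniform values, a coin-flip lottery at the zero cost of service versus a second-price auction):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
v1, v2 = rng.random(n), rng.random(n)   # two bidders, values U[0,1]

# Rationing: a coin flip decides, and the winner pays nothing.
cs_rationing = 0.5 * v1 + 0.5 * v2

# Second-price auction: the high bidder wins and pays the loser's value,
# so the realized consumer surplus is max - min = |v1 - v2|.
cs_auction = np.abs(v1 - v2)

print(cs_rationing.mean(), cs_auction.mean())   # ~0.50 versus ~0.33
# Matching the formulas: 2 * E[v/2] = 1/2 and 2 * E[v^2/2] = 1/3.
```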
However, it turns out that the best mechanism for maximizing consumer surplus is always complete rationing (I will prove this as a part of a more general demonstration tomorrow.) Set price equal to marginal cost and use a lottery (or a queue) to allocate among those willing to pay the price. (I assume that the restaurant is not going to just give away money.)
What this tells us is that maximizing consumer surplus can’t be what Nick Kokonas wants. Because with the consumer surplus maximizing mechanism, the restaurant just breaks even. And in this analysis we are leaving out all of the usual problems with rationing such as scalping, encouraging bidders with near-zero willingness to pay to submit bids, etc.
So tomorrow I will take a second stab at the question in search of a good theory of pricing that takes into account the “value proposition” motivation.
Addendum: I received comments from David Miller and Kane Sweeney that will allow me to elaborate on some details. It gets a little more technical than the rest of these posts so you might want to skip over this if you are not acquainted with the theory.
David Miller reminded me of a very interesting paper by Ran Shao and Lin Zhou. (See also this related paper by the same authors.) They demonstrate a mechanism that achieves a higher consumer surplus than the complete rationing mechanism and indeed that achieves the highest consumer surplus among all dominant-strategy, individually rational mechanisms.
Before going into the details of their mechanism let me point out the difference between the question I am posing and the one they answer. In formal terms I am imposing an additional constraint, namely that the restaurant will not give money to any consumer who does not obtain a ticket. The restaurant can give tickets away but it won’t write a check to those not lucky enough to get freebies. This is the right restriction for the restaurant application for two reasons. First, if the restaurant wants to maximize consumer surplus it’s because it wants to make people happy about the food they eat, not happy about walking away with no food but a payday. Second, as a practical matter, a mechanism that gives away money is just going to attract non-serious bidders who are looking for a handout.
In fact Shao and Zhou are starting from a related but conceptually different motivation: the classical problem of bilateral trade between two agents. In the most natural interpretation of their model the two bidders are really two agents negotiating the sale of an object that one of them already owns. Then it makes sense for one of the agents to walk away with no “ticket” but a paycheck. It means that he sold the object to the other guy.
Ok, with all that background, here is their mechanism in its simplest form. Agent 1 is provisionally allocated the ticket (so he becomes the seller in the bilateral negotiation). Agent 2 is given the option to buy from agent 1 at a fixed price. If his value is above that price he buys and pays that price to agent 1. Otherwise agent 1 keeps the ticket and no money changes hands. (David in his comment described a symmetric version of the mechanism, which you can think of as representing a random choice of who will be provisionally allocated the ticket. In our correspondence we figured out that the payment scheme for the symmetric version should be a little different; it’s an exercise to figure out how. But I didn’t let him edit his comment. Ha Ha Ha!!!)
In the uniform case the price should be set at 50 cents and this gives a total surplus of 5/8, outperforming complete rationing. It’s instructive to understand how this is accomplished. As I pointed out, an auction takes away consumer surplus from high-valuation types. But in the Shao-Zhou framework there is an upside to this, because the money extracted is used to pay off the other agent, raising his consumer surplus. So you want to at least use some auction elements in the mechanism.
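Here is a quick simulation sketch (again my own code) verifying the 5/8 figure for the uniform case:

```python
import random

def shao_zhou_surplus(n_trials=1_000_000, price=0.5):
    # Agent 1 is provisionally allocated the ticket; agent 2 can buy it
    # at the posted price, and the payment goes to agent 1, so no
    # surplus is burned.
    total = 0.0
    for _ in range(n_trials):
        v1, v2 = random.random(), random.random()
        if v2 > price:
            # Agent 2 takes the ticket and agent 1 pockets the price:
            # total surplus is (v2 - price) + price = v2.
            total += v2
        else:
            # No trade: agent 1 keeps the ticket.
            total += v1
    return total / n_trials

print(shao_zhou_surplus())  # about 0.625 = 5/8, versus 1/2 under rationing
```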
One common theme in my analysis and theirs is in fact a deep and under-appreciated result: you never want to “burn money.” Using an auction is worse than complete rationing because the screening benefits of pricing are outweighed by the surplus lost due to the payments to the seller. Using the Shao-Zhou mechanism is optimal precisely because it finds a clever way to redirect those payments so no money is burned. By the way, this is also an important theme in David Miller’s work on dynamic mechanisms. See here and here.
Finally, we can verify that the Shao-Zhou mechanism would no longer be optimal if we adapted it to satisfy the constraint that the loser doesn’t receive any money. It’s easy to do this based on the revenue equivalence theorem. In the Shao-Zhou mechanism an agent with zero value gets expected utility equal to 1/8 due to the payments he receives. We can subtract utility of 1/8 from all types and obtain an incentive-compatible mechanism with the same allocation rule. This would be just enough to satisfy my constraint. And then the total surplus will be 5/8 − 2/8 = 3/8 (we subtract 1/8 from each of the two agents), which is less than the 1/2 of the complete rationing mechanism. That’s another expression of the losses associated with using even the very crude screening in the Shao-Zhou mechanism.
Next let me tell you about my correspondence with Kane Sweeney. He constructed a simple example where an auction outperforms rationing. It works like this. Suppose that each bidder either has a very low willingness to pay, say 50 cents, or a very high willingness to pay, say $1,000. If you ration, then expected surplus is about $500. Instead you could do the following. Run a second-price auction with the following modification to the rules: if both bid $1,000 then toss a coin and give the ticket to the winner at a price of $1. This mechanism gives an expected surplus of about $750.
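Here is a simulation sketch of Kane’s example (my code; the assumption that the two types are equally likely is mine, though it matches the “about $500” and “about $750” figures):

```python
import random

def kane_example(n_trials=1_000_000):
    # Each bidder is a $0.50 type or a $1,000 type with equal probability
    # (an assumption on my part, consistent with the figures quoted above).
    ration, auction = 0.0, 0.0
    for _ in range(n_trials):
        v1 = 0.5 if random.random() < 0.5 else 1000.0
        v2 = 0.5 if random.random() < 0.5 else 1000.0
        # Rationing: a coin-flip winner pays nothing.
        ration += v1 if random.random() < 0.5 else v2
        # Modified second-price auction: if both are high types, toss a
        # coin and sell at $1; otherwise the winner pays the losing bid.
        if v1 == v2 == 1000.0:
            auction += 1000.0 - 1.0
        else:
            auction += max(v1, v2) - min(v1, v2)
    print(f"rationing: ${ration / n_trials:8.2f}")   # about $500
    print(f"auction:   ${auction / n_trials:8.2f}")  # about $750

kane_example()
```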
Basically this type of example shows that the monotone hazard rate assumption is important for the superiority of rationing. To see this, suppose that we smooth out the distribution of values so that types between 50 cents and $1,000 have very small positive probability. Then the hazard rate is first increasing around 50 cents and then decreasing from 50 cents all the way to $1,000. So you want to pool all the types above 50 cents but you want to screen out the 50-cent types. That’s what Kane’s mechanism is doing.
I would interpret Kane’s mechanism as delivering a slightly nuanced version of the rationing message. You want to screen out the non-serious bidders but ration among all of the serious bidders.
We are reading it in my Behavioral Economics class and so far we have finished the first five chapters, which make up Part I of the book, “Anticipating Future Preferences.” In Ran Spiegler’s typical style, perfectly crafted simple models are used to illustrate deep ideas that lie at the heart of existing frontier research and, no doubt, of the future research this book is bound to inspire.
A nod also has to go to Kfir Eliaz who is Rani’s longtime collaborator on many of the papers that preceded this book. Indeed, in a better world they would form a band. It would be an early ’90s geek-rock band like They Might Be Giants or whichever band it was that did The Sweater Song. I hereby name their band Hasty Belgium. (Names of other bands here.)
Many of the examples in the book are referred to as “close variations of” or “free variations of” papers in the literature. And Rani has even written a paper that he calls “a cover version of” a paper by Heidhues and Koszegi. So to continue the metaphor, I offer here some liner notes for the book.
In chapter 5 there is a fantastic distillation of a model due to Michael Grubb that explains Netflix pricing. Conventional models of price discrimination cannot explain three-part tariffs: a membership fee, a low initial per-unit price, and then a high per-unit price that kicks in above some threshold quantity. (Netflix is the extreme case where the initial price per movie is zero, and above some number the price is infinite.) Rani constructs the simplest and clearest possible model to show how such a pricing system is the optimal way to take advantage of consumers who are over-confident in their beliefs about their future demand.
A conventional approach to pricing would be to set price equal to marginal cost, thereby incentivizing the consumer to demand the efficient quantity, and then to add on a membership fee that extracts all of his surplus. You can think of this as the Blockbuster model. The Netflix model by contrast reduces the per-unit price to zero (up to some monthly allotment) but raises the membership fee.
Here’s how that increases profits. Many of us mistakenly think we will watch lots of movies. Netflix re-arranges the pricing structure so that the total amount we expect to pay when we watch all of those movies is the same as in the Blockbuster model. Only now we are paying it all up front in the form of a membership fee. If it turns out that we watch as many movies as we anticipated, we are no better or worse off and neither is Netflix.
But in fact most of us discover that we are always too busy to watch movies. In the Blockbuster system, when that happens we don’t watch movies, so we don’t pay per-unit prices and Blockbuster doesn’t make much money. In the Netflix system it doesn’t matter how many movies we watch, because we already paid.
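To make the comparison concrete, here is a stylized numeric sketch. The numbers are my own placeholders, not Grubb’s or Spiegler’s calibration; the point is only that front-loading the payment raises profits when consumers over-predict their demand:

```python
# The consumer believes she will watch BELIEVED movies but actually
# watches ACTUAL; the firm prices against her (mistaken) beliefs.
BELIEVED, ACTUAL = 10, 2   # placeholder quantities
VALUE = 4.0                # her value per movie, in dollars
COST = 1.0                 # the firm's marginal cost per movie

# "Blockbuster": per-unit price equal to marginal cost, plus a membership
# fee extracting all the surplus she *believes* she will get.
bb_fee = BELIEVED * (VALUE - COST)
bb_profit = bb_fee + ACTUAL * (COST - COST)   # the per-unit margin is zero

# "Netflix": per-unit price of zero, and a fee equal to the total payment
# she *expects* to make under the Blockbuster scheme.
nf_fee = bb_fee + BELIEVED * COST
nf_profit = nf_fee - ACTUAL * COST

print(f"Blockbuster profit: ${bb_profit:.2f}")  # $30.00
print(f"Netflix profit:     ${nf_profit:.2f}")  # $38.00
```

Ex ante the consumer expects to pay $40 either way, so she is indifferent between the two schemes; the profit difference comes entirely from the movies she thought she would watch but didn’t.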
My only complaint about the book is the title. (Not for those reasons, no.) The term “Bounded Rationality” has fallen out of favor and for good reason. It’s pejorative and it doesn’t really mean anything. A more contemporary title would have been Behavioral Industrial Organization. Now I agree that “Behavioral” is at least as meaningless as “Bounded Rationality.” Indeed it has even less meaning. But that’s a virtue because we don’t have any good word for whatever “Bounded Rationality” and “Behavioral” are supposed to mean. So I prefer a word that has no meaning at all to “Bounded Rationality,” which suggests a meaning that is misplaced.
Predict which flights will be overbooked, buy a ticket, trade it in for a more valuable voucher.
Still, there are some travelers who see the flight crunch as a lucrative opportunity. Among them is Ben Schlappig. The 20-year-old senior at the University of Florida said he earned “well over $10,000” in flight vouchers in the last three years by strategically booking flights that were likely to be oversold in the hopes of being bumped.
“I don’t remember the last time I paid over $100 for a ticket,” he boasted. His latest coup: picking up $800 in United flight vouchers after giving up his seat on two overbooked flights in a row on a trip from Los Angeles to San Francisco. Or as he calls it, “a double bump.”
The full article has a rundown of all the tricks you need to know to get into the bumpee business. I was surprised to read this.
Most of those people volunteered to give up their seats in return for some form of compensation, like a voucher for a free flight. But D.O.T. statistics also show that about 1.09 of every 10,000 passengers was bumped involuntarily.
On the other hand, it is not surprising that involuntary bumping is so rare, because involuntary bumping only lowers the value of a ticket, whereas monetary (or voucher) compensation can be recouped in the price of the ticket (in expectation).
Garrison grab: Daniel Garrett.
A former academic economist and game theorist is now the Chief Economic Advisor in the Ministry of Finance in India. His name is Kaushik Basu. Via MR, here is a policy paper he has just written advising that the giving of bribes should be de-criminalized.
The paper puts forward a small but novel idea of how we can cut down the incidence of bribery. There are different kinds of bribes and what this paper is concerned with are bribes that people often have to give to get what they are legally entitled to. I shall call these “harassment bribes.” Suppose an income tax refund is held back from a taxpayer till he pays some cash to the officer. Suppose government allots subsidized land to a person but when the person goes to get her paperwork done and receive documents for this land, she is asked to pay a hefty bribe. These are all illustrations of harassment bribes. Harassment bribery is widespread in India and it plays a large role in breeding inefficiency and has a corrosive effect on civil society. The central message of this paper is that we should declare the act of giving a bribe in all such cases as legitimate activity. In other words the giver of a harassment bribe should have full immunity from any punitive action by the state.
This is not just crazy talk; there is some logic behind it, fleshed out in the paper. If giving a bribe is forgiven but demanding a bribe remains a crime, then citizens forced to pay bribes for routine government services will have an incentive to report the bribe to the authorities. This will discourage harassment bribery.
The obvious question is whether the bribe-enforcement authority will itself demand bribes. To whom does a citizen report having given a bribe to the bribe authority? At some point there is a highest bribe authority, and it can demand bribes with impunity. With that power it can extract all of the reporter’s gains by demanding them as a bribe.
Worse still, it can demand an additional bribe from the original harasser in return for exonerating her. The effect is that the harasser sees only a fraction of the return on her bribe demands. This induces her to ask for even higher bribes. Higher bribes mean fewer citizens are able to pay them and fewer citizens receive their due government services.
The bottom line is that in an economy run on bribes you want to make the bribes as efficient as possible. That may mean encouraging them rather than discouraging them.
There are a few basic features that Grant Achatz and Nick Kokonas should build into their online ticket sales. First, you want a good system to generate the initial allocation of tickets for a given date; second, you want an efficient system for re-allocating tickets as the date approaches; finally, you want to balance revenue maximization against the good vibe that comes from getting a ticket at a non-exorbitant price.
- Just like with the usual reservation system, you would open up ticket sales for, say, August 1 three months in advance, on May 1. It is important that the mechanism be transparent, but at the same time understated so that the business of selling tickets doesn’t draw attention away from the main attractions: the restaurant and the bar. The simple solution is to use a sealed-bid (N+1)st-price auction. Anyone wishing to buy a ticket for August 1 submits a bid. Only the restaurant sees the bid. The top 100 bidders get tickets and they pay a price equal to the 101st highest bid. Each bidder is informed whether he won or not and the final price. With this mechanism it is a dominant strategy to bid your true maximal willingness to pay, so the auction is transparent, and all of the action takes place behind the scenes so the auction won’t be a spectacle distracting from the overall reputation of the restaurant. (A code sketch of this rule, together with the resale rule below, appears after this list.)
- Next probably wants to allow patrons to buy at lower prices than what an auction would yield. That makes people feel better about the restaurant than if it was always trying to extract every last drop of consumer surplus. It’s easy to work that into the mechanism. Decide that 50 out of 100 seats will be sold to people at a fixed price and the remainder will be sold by auction. The 50 lucky people will be chosen randomly from all of those whose bid was at least the fixed price. The division between fixed-price and auction quantities could easily be adjusted over time, for different days of the week, etc.
- The most interesting design issue is to manage re-allocation of tickets. This is potentially a big deal for a restaurant like Next because many people will be coming from out of town to eat there. Last-minute changes of plans could mean that rapid re-allocation of tickets will have a big impact on efficiency. More generally, a resale market raises the value of a ticket because it turns the ticket into an option. This increases the amount people are willing to bid for it. So Next should design an online resale market that maximizes the efficiency of the allocation mechanism because those efficiency gains not only benefit the patrons but they also pay off in terms of initial ticket sales.
- But again you want to minimize the spectacle. You don’t want Craigslist. Here is a simple, transparent system that is again discreet. After the original allocation of tickets by auction, anyone who wishes to purchase a ticket for August 1 submits a bid to the system. In addition, anyone currently holding a ticket for August 1 has the option of submitting a resale price to the system. These bids are all kept secret internally in the system. At any moment at which the second-highest bid exceeds the second-lowest resale price offered, a transaction occurs. In that transaction the highest bidder buys the ticket and pays the second-highest bid. The seller who offered the lowest price sells his ticket and receives the second-lowest price. (This crossing rule is included in the code sketch after the list.)
- That pricing rule has two effects. First, it makes it a dominant strategy for buyers to submit bids equal to their true willingness to pay and for sellers to set their true reserve prices. Second, it ensures that Next earns a positive profit from every sale, equal to the difference between the second-highest bid and the second-lowest resale price. In fact it can be shown that this is the system that maximizes the efficiency of the market subject to the constraints that the market is transparent (i.e. dominant strategies) and that Next does not lose money from the resale market.
- The system can easily be fine-tuned to give Next an even larger cut of the transactions gains, but a basic lesson of this kind of market design is that Next should avoid any intervention of that sort. Any profits earned through brokering resale only reduce the efficiency of the resale market. If Next is taking a cut then a trade will only occur if the gains outweigh Next’s cut. Fewer trades mean a less efficient resale market, and that means that a ticket is a less flexible asset. The final result is that whatever profits are being squeezed out of the resale market are offset by reduced revenues from the original ticket auction.
- The one exception to the latter point is the people who managed to buy at the fixed price. If the goal was to give those people the gift of being able to eat at Next for an affordable price, and not to give them the gift of being able to resell to high rollers, then you would offer them only the option to sell back their ticket at the original price (with Next either selling it again at the fixed price or at the auction price, pocketing the spread). This removes the incentive for “scalpers” to flood the ticket queue, something that is likely to be a big problem for the system currently being used.
- A huge benefit of a system like this is that it makes maximal use of information about patrons’ willingness to pay, and with minimal effort. Compare this to a system where Next tries to gauge buyer demand over time and set the market clearing price. First of all, setting prices is guesswork. An auction figures out the price for you. Second, when you set prices you learn very little about demand. You learn only that so many people were willing to pay more than the price. You never find out how much more than that price people would have been willing to pay. A sealed-bid auction immediately gives you data on everybody’s willingness to pay, at every moment in time. That’s very valuable information.
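As promised, here is a minimal sketch of the two rules in code. The seat counts, the $75 fixed price, and the zero reserve price are placeholders of mine, not anything Next has announced:

```python
import random

def allocate_tickets(bids, n_seats=100, n_fixed=50, fixed_price=75.0):
    # `bids` maps each bidder to a sealed bid for a given date.
    winners = {}

    # Fixed-price tranche: a lottery among everyone bidding at least
    # the fixed price.
    eligible = [b for b, amt in bids.items() if amt >= fixed_price]
    for b in random.sample(eligible, min(n_fixed, len(eligible))):
        winners[b] = fixed_price

    # Auction tranche: rank the remaining bidders by bid; the top
    # n_auction win and all pay the (n_auction + 1)st highest such bid.
    n_auction = n_seats - n_fixed
    rest = sorted(((amt, b) for b, amt in bids.items() if b not in winners),
                  reverse=True)
    # If demand falls short the price here is zero; a reserve price
    # would be a natural refinement.
    clearing = rest[n_auction][0] if len(rest) > n_auction else 0.0
    for amt, b in rest[:n_auction]:
        winners[b] = clearing
    return winners

def cross_resale(buy_bids, sell_asks):
    # buy_bids and sell_asks are lists of (amount, name) pairs. A trade
    # fires when the second-highest bid exceeds the second-lowest ask:
    # the top bidder pays the second-highest bid, the lowest-ask seller
    # receives the second-lowest ask, and Next keeps the spread.
    if len(buy_bids) < 2 or len(sell_asks) < 2:
        return None
    bids = sorted(buy_bids, reverse=True)
    asks = sorted(sell_asks)
    if bids[1][0] > asks[1][0]:
        return {"buyer": bids[0][1], "pays": bids[1][0],
                "seller": asks[0][1], "receives": asks[1][0],
                "next_cut": bids[1][0] - asks[1][0]}
    return None
```

In both markets the price a trader pays or receives never depends on his own report, which is what delivers the dominant-strategy properties claimed above.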
The Boston Globe profiles Al Roth, who, together with Atila Abdulkadiroglu, Tayfun Sonmez, Utku Unver and many others, is leading the most important development in Microeconomics right now: market design.
Roth’s most recent project is helping to set up a nationwide kidney exchange, which would make it possible to find even more matches than the existing regional networks can find on their own. Running this national network has been a bureaucratic nightmare, and since it opened for business last fall, only two transplants have actually been carried out under its auspices. The problem is that depending on blood type, it can be hard or easy to find someone a compatible kidney. And when a hospital has an easy-to-match patient, its administrators are more likely to withhold that information from the other hospitals in the network because they’d rather do the transplant themselves, and get the business.
Alex Tabarrok wrote a thought-provoking piece on some ideas to increase kidney donation.
A distinguished colleague (whom I will spare the outing) teaches in the lecture room after me. I received this email from him:
Subject: Any chance you could erase the Leverdome blackboard?
Or is this a Coase theorem thing?
Not the Coase Theorem, no. The Coase Theorem is all about parties coming together to form agreements that enhance welfare. No, my dust-bound comrade, this is much simpler, seeing as how aggregate welfare is improved by the unilateral deviation of a single agent, namely me.
You see, in those days when we were following the conventional norm, according to which each Professor erases the chalkboard after his own lecture, leaving a clean board for the next class, we were leaving a free lunch just sitting there on the table. Because any one of us could have changed course, leaving the board to be erased by the next guy before his class, thus triggering a switch to the superior erase-before convention.
Now as I am sure I don’t have to explain to you, once the convention is settled every Professor erases exactly once per day. So nobody is any worse off. But as you have by now noticed, that one particular Professor who initiated the switch avoids erasing that one time and is therefore strictly better off. A Pareto improvement! But of course you are now well-trained at spotting those, having just yesterday surveyed my lecture notes covering that very subject as you were erasing them from the Leverdome chalkboard.