From MR, I read this story about how the San Francisco smart parking meters will be designed to adjust meter rates in real time according to demand. There wasn’t much detail there but this bit gave me pause.
Rates at curbside meters in the project area will be adjusted block by block in an attempt to have at least one parking space available at any time on a given block.
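The natural way to implement that target is a simple feedback rule: measure occupancy block by block and nudge the rate up when the block is full, down when it is mostly empty. Here is a minimal sketch of such a rule; the target occupancy, step size and rate caps are my own illustrative assumptions, not details of the actual San Francisco program.

```python
# Minimal sketch of a demand-responsive meter rule (my own illustrative numbers,
# not SFpark's actual algorithm): if a block is essentially full, raise the
# hourly rate a notch; if it has plenty of free spaces, lower it.
def adjust_rate(current_rate, occupied, spaces,
                step=0.25, min_rate=0.25, max_rate=6.00):
    """Return next period's hourly rate for one block."""
    if spaces - occupied <= 1:          # zero or one space free: too full
        new_rate = current_rate + step
    elif occupied / spaces < 0.7:       # lots of free spaces: rate too high
        new_rate = current_rate - step
    else:                               # near the target, leave the rate alone
        new_rate = current_rate
    return max(min_rate, min(max_rate, new_rate))

# A full 10-space block gets a 25-cent bump; a half-empty one gets a cut.
print(adjust_rate(2.00, occupied=10, spaces=10))  # 2.25
print(adjust_rate(2.00, occupied=5, spaces=10))   # 1.75
```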
(My approach to blogging is to send myself emails whenever I have an idea, then sort through those emails when I have the time and decide what to write about. Some ideas have gathered dust over the past year and it's time to use them or lose them.)
When do you give up on a book? It's an optimal stopping problem with an experimentation aspect. The more you read, the more uncertainty gets resolved and the more you learn about whether the book will be rewarding enough to finish. You stop reading when the expected continuation value, which includes the option value of quitting later, falls below the value of the next book in your queue.
So here’s an interesting question. Is that more likely to happen at the beginning or near the end of a book? Ignore the irrational desire to complete a book just because you have already sunk a lot of time into reading it. (But do include the payoff from finding out what happens with all the threads you have followed along the way.)
It easily could be that the most likely time to quit reading a book is close to the end. Indeed the following is a theorem. For any belief about the flow value of the book going forward, if that belief leads you to dump the book near the beginning, then that same belief must lead you to dump the book nearer the end. Because the closer you are to the end of the book, the lower the option value and the smaller the chance that the book will get better.
It sounds wrong because probably even the most ruthless book trashers rarely quit near the end. But there's no contradiction. Even if the option value rule implies that the threshold quality required to continue reading is increasing as you get deeper into the book, it can still be true that statistically you most often quit reading near the beginning of a book. Because conditional on a book being dump-worthy, you are more likely to figure that out and cross that threshold for the first time early on rather than later.
Two cars are for sale on the used car lot. Car A has 62,847 miles on it and is listed for $7500. Car B has 89,113 miles on it and is listed for $5600. Which car would you buy?
Did you think about it? OK, now your answer is not really important, but without looking back again at the numbers, answer this question: what was the third digit of the mileage reading of the two cars? You probably don’t remember because you probably didn’t pay close attention.
Apparently this inattention has been priced into the used car market. For example, used cars with 39,900 miles on the odometer sell for significantly more than cars with 40,000 miles but not much less than cars with 39,000 miles. This is the starting point of a paper by Nicola Lacetera, Devin Pope, and Justin Sydnor. They go on to estimate a structural model of attentiveness from the magnitude of the price discontinuities.
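To get a feel for how a discontinuity like that can be measured, here is a toy sketch (not the authors' estimation): simulate prices that take a lump-sum drop when the odometer crosses 40,000 miles, then recover the drop by comparing cars just below and just above the threshold. The numbers, the 500-mile bandwidth and the known depreciation slope are all made up for illustration; the actual paper estimates a structural model on wholesale auction data.

```python
# Toy sketch (not the paper's estimation): prices drop by a lump sum at the
# 40,000-mile mark; estimate the drop by comparing cars within 500 miles on
# either side, projecting both groups to the threshold with the (here, known)
# smooth depreciation slope. All numbers are invented.
import random
random.seed(0)

TRUE_DROP = 200.0    # assumed discrete drop when the odometer rolls past 40,000
SLOPE = 0.03         # smooth depreciation per mile

def price(miles):
    drop = TRUE_DROP if miles >= 40_000 else 0.0
    return 14_000 - SLOPE * miles - drop + random.gauss(0, 50)

below = [(m, price(m)) for m in (random.randint(39_500, 39_999) for _ in range(5_000))]
above = [(m, price(m)) for m in (random.randint(40_000, 40_499) for _ in range(5_000))]

adj_below = [p - SLOPE * (40_000 - m) for m, p in below]   # project forward to 40,000
adj_above = [p + SLOPE * (m - 40_000) for m, p in above]   # project back to 40,000
est = sum(adj_below) / len(adj_below) - sum(adj_above) / len(adj_above)
print(f"estimated drop at the 40,000-mile mark: about ${est:.0f}")  # close to 200
```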
In fact, as they observe, sellers seem to understand this effect very well. Here is a graph showing the volume of used cars sold wholesale at different odometer readings. There is a huge spike in volume at the right end of an interval and a huge drop on the other side.
Indeed you could imagine that this kind of effect manifests even in a world where most buyers have unlimited attentiveness. Everybody knows that the price drops at the 1,000 mile mark so used cars with 31,100 miles are leased to rental agencies until they reach the 31,900 mark. The only used cars for sale with 31,100 miles on them are the lemons that the seller is willing to part with at the price that older cars are selling for. Buyers know this so they treat all cars with between 31,000 and 31,999 miles as equivalent to the oldest car in that range.
(The next time you see zeros roll over on your odometer you will understand why that always feels like a watershed event. Your car is only a mile older but it just took a discrete drop in value.)
In January I made a prediction and kept it a secret. Now I can tell you what it was.
For fun I wanted to see how well I could predict the Economics PhD job market. As a simple test I tried to predict who would be selected for the Review of Economic Studies Tour, a kind of all-star team of new PhDs. Here was my predicted list
- Gharad Bryan (Yale)
- Matt Elliot (Stanford)
- Neale Mahoney (Stanford)
- Alex Torgovitsky (Yale)
- Glen Weyl (Harvard)
- Alex Wolitzky (MIT)
- Alessandra Voena (Stanford)
- Kei Kawai (Northwestern)
And here is the actual list (the RES Tour website is here.)
- Alex Wolitzky (going to Stanford?)
- Daniel Keniston (MIT, going to Yale)
- Mar Reguant (MIT, going to Stanford GSB)
- Kei Kawai (going to Princeton?)
- Alex Torgovitsky (coming to NU)
- Alessandra Voena (going to Chicago?)
- Peter Koudijs (Pompeu Fabra, going to Stanford GSB)
So I got 4 out of 8. (There are usually 7 people, and I predicted 7 originally, but I updated it the next day adding Kawai to the list. I had been so involved in recruiting students from other schools that I had completely forgotten about our own star student Kei Kawai and as soon as I remembered him I added him to the list.)
I previously blogged about Torgovitsky, Mahoney, and Koudijs.
You can see why I wanted to keep the prediction a secret until the market was over. You can verify my prediction by cutting and pasting the text in this file and generating its unique SHA1 hash (a digital fingerprint) with this web tool and cross-checking that it is the hash that I originally posted here, and that I tweeted here, and that is reproduced below.
f502acfb48395d6ab223ca30803f98b9bd6fd6ce
(Here is the original prediction before I added Kawai, and here is the hash for that one.)
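If you want to replicate the verification step yourself, it is a one-liner in most languages. A sketch, assuming the committed text is saved locally as prediction.txt (a hypothetical filename): any change to the bytes of the file, even a stray newline, produces a completely different hash.

```python
# Recompute the SHA-1 digest of the committed text and compare it with the
# published hash. "prediction.txt" is a hypothetical filename standing in for
# the committed file; the bytes must match exactly (including line endings).
import hashlib

published = "f502acfb48395d6ab223ca30803f98b9bd6fd6ce"

with open("prediction.txt", "rb") as f:
    digest = hashlib.sha1(f.read()).hexdigest()

print(digest)
print("matches the published hash" if digest == published else "does not match")
```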
I did this as an experiment to see how easy it is to predict job market outcomes. At the time I made this prediction I had read the files of each of these candidates and interviewed most of them. I didn't know where else they had interviews and I made the prediction before the stage of flyout interviews so I had little information about how their job market was going overall.
Getting half right is about what I expected. To me this is evidence that the market is hard to predict even after having interviewed the candidates. In particular I take it as evidence against the cynical view that the market herds on certain candidates early in the process. Indeed I would not have changed my prediction much even a week or two later when flyout schedules were in place.
Incidentally the Review Tour rosters say something about the strength of PhD programs. Here’s a breakdown of the last 6 years and where the tourists received their PhDs:
MIT 12
Harvard 7
NWU 4
Yale 3
Stanford 3
Stanford GSB 2
BU, Duke, LSE, Michigan, NYU, Penn, Princeton, Ohio State, Stern, UPF, UCL — 1 each
Northwestern is doing very well! (In addition to producing stars, we also do well hiring them. 3 from the tour in the past 6 years, including Alex Torgovitsky this year, an outstanding hire.)
Also I understand that a team is at work creating the web tool that I suggested here for creating and managing secret predictions. If and when it hits I will announce it here.
I got this tweet from @gappy3000
In this NYT discussion on US inequality http://nyti.ms/fIIb1N nobody mentions the Benabou-Tirole paper. Depressing. http://bit.ly/fbUek8
I won’t rectify the omission but I will talk about an earlier paper “Social Mobility and Redistributive Politics” by Thomas Piketty which has a similar idea and is based on a much simpler logic. The idea is that these three characteristics seem to come in a bundle:
- Wealthy family
- Belief that individual effort matters for success
- Opposition to redistributive social policy
and that less wealthy families are characterized by support for redistribution and the belief that family background (i.e. social class) matters more for success.
Piketty developed a simple theory for why these distinct worldviews can coexist in the same world even though one of them must be wrong and even though all households have the same intrinsic preference for equity. It's based on the observation that there is an identification problem in sorting out whether family background or individual effort matters for success, and once a family falls into one of these categories, no amount of experience can change that.
To see the problem, suppose that your family’s history has led you to believe that individual effort matters very little for success. Then you have little incentive to work hard and you will be relatively unsuccessful. You will notice that some people are successful but you will attribute that to the luck of their social class. (However it easily could be that you are wrong and their success is due to their hard work.) Indeed even in your own family history you will record some episodes of upward mobility, but because your family doesn’t work hard you rightly attribute that to luck too. In a world with such inequality and where effort matters little you see a strong moral justification for redistribution and little incentive cost.
On the other hand if your family has learned, and teaches you, that effort matters you will optimally work hard. You will be more successful than average and you will attribute this to your hard work. (It easily could be that you are wrong and in fact the source of your success is just your social class.) You will see that other families are less successful on average and as a result you see the same moral reason for redistribution, but you think that effort matters and, understanding how redistribution reduces the incentive to work hard, you favor less redistribution than the median voter.
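Here is a toy simulation of that self-confirming logic, with stylized numbers of my own rather than Piketty's actual model. Each dynasty only ever observes outcomes generated by its own effort choice, so its own experience never contradicts its worldview.

```python
# Toy illustration (stylized numbers of my own, not Piketty's model): each
# dynasty's data are generated by its own effort choice, so neither worldview
# is ever refuted by its own experience.
import random
random.seed(1)

P_BASE, EFFORT_BONUS = 0.3, 0.4     # true success prob: 0.3 idle, 0.7 with effort

def observed_success_rate(works, generations=500):
    p = P_BASE + (EFFORT_BONUS if works else 0.0)
    return sum(random.random() < p for _ in range(generations)) / generations

left = observed_success_rate(works=False)   # believes effort doesn't matter, so doesn't work
right = observed_success_rate(works=True)   # believes effort matters, so works

print(f"dynasty that never works:  success rate {left:.2f} (fits 'success is luck/class')")
print(f"dynasty that always works: success rate {right:.2f} (fits 'effort pays off')")
# Neither dynasty ever observes the counterfactual that would distinguish
# 'effort matters' from 'class matters', so both beliefs are self-confirming.
```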
How does this contribute to the debate in the link above? This indeterminacy in attitudes toward redistribution means that differences across countries and over time are essentially arbitrary and due to factors that we can call culture. Inequality is high in the United States and low in Europe. You are tempted to say “and yet there is less outrage about inequality in the US than there is in Europe” but in fact you should say “precisely because there is…”
Homburg Haul to Pierre Yared who first told me about the Piketty paper.
You probably heard about the Facebooks of China. Facebook and Twitter are blocked there and filling the vacuum are some homegrown social networks. Obviously the biggest issue with this in the particular instance of China is freedom of information and expression, so the thought experiment I will propose requires a little abstraction to focus on a separate issue.
Sites like Renren and Kaixin001 are microcosms of today’s changing China — they copy from the West, but then adjust, add, and, yes, even innovate at a world-class level, ultimately creating something unquestionably modern and distinctly Chinese. It would not be too grand to say that these social networks both enable and reflect profound generational changes, especially among Chinese born in the 1980s and 1990s. In a society where the collective has long been emphasized over the individual, first thanks to Confucian values and then because of communism, these sites have created fundamentally new platforms for self-expression. They allow for nonconformity and for opportunities to speak freely that would be unusual, if not impossible, offline. In fact, these platforms might even be the basis for a new culture. “A good culture is about equality, acceptance, and affection,” says Han Taiyang, 19, a psychology major at Tsinghua University who uses Renren constantly. “Traditional thinking restrains one’s fundamental personality. One must escape.”
Goods (like social networks) that have bandwagon effects create the greatest value when the size of the market is largest. But that same effect can cause convergence on a bad standard. One argument, very narrow of course, for trade barriers is to prevent that from happening. We allow each country to develop critical mass in their own standard in isolation before we reduce trade barriers and allow them to compete.
Grading still hangs over me but teaching is done. So, I finally had time to read Kiyotaki Moore. It’s been on my pile of papers to read for many, many years. But it rose to the top because, first, my PhD teaching allowed me to finally get to Myerson’s bargaining chapter in his textbook and Abreu-Gul’s bargaining with commitment model and, second, because Eric Maskin recommends it as one of his key papers for understanding the financial crisis. So, some papers in my queue were cleared out and Kiyotaki-Moore leaped over several others.
I see why the paper has over 2000 cites on Google Scholar.
The main propagation mechanism in the model relies on the idea that credit-constrained borrowers borrow against collateral. The greater the value of the collateral ("land"), the greater the amount they can borrow. So if for some reason next period's price of land is high, the borrower can borrow more against his land this period. Suppose there is an unexpected positive shock to the productivity of land. This increases the value of land and hence its price. This capital gain increases borrowing. An increase in the value of land increases economic activity. It also increases demand for land and hence the price of land. This can choke off some demand for land. The more elastic the supply of land, the smaller is the latter dampening effect. So there can be a significant multiplier to a positive shock to technology.
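To see why that feedback can amplify a small shock, here is a deliberately reduced-form caricature (my own toy, not the Kiyotaki-Moore model): spending power moves the land price, the land price relaxes the collateral constraint, and relaxed borrowing feeds back into spending power. Iterating the loop shows the direct effect of a shock getting multiplied by 1/(1 minus the feedback strength).

```python
# Deliberately reduced-form caricature (my assumptions, not the actual
# Kiyotaki-Moore model): a shock adds to entrepreneurs' spending power, spending
# power moves the land price, and a higher land price raises the amount that can
# be borrowed against land, which adds to spending power again.
def land_price(shock, beta=1.0, theta=0.6, q0=10.0, rounds=200):
    """beta: how strongly spending power moves the price; theta: fraction of
    land value that can be borrowed against (the collateral constraint)."""
    q = q0
    for _ in range(rounds):
        q = q0 + beta * (shock + theta * q)
    return q

direct = land_price(1.0, theta=0.0) - land_price(0.0, theta=0.0)   # feedback shut off
amplified = land_price(1.0) - land_price(0.0)
print(round(direct, 3), round(amplified, 3))   # 1.0 vs 2.5: the shock is multiplied by 1/(1 - beta*theta)
```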
(Why are borrowers constrained in their borrowing by the value of their land rather than the NPV of their projects? Kiyotaki-Moore rely on a model of debt of Hart and Moore to justify this constraint. While Hart-Moore is also in my pile, I have not yet had time to read it. I did note they have an extremely long Appendix to justify the connection between collateral and borrowing! The main idea in Hart-Moore is that an entrepreneur can always walk away from a project and hold it up. As his human capital is vital for the project's success, he will be wooed back in renegotiation. The Appendix must argue that he captures all the surplus above the liquidation value of the land. Hence, the lender will only be willing to lend up to the value of the collateral to avoid hold up.)
But how do we get credit cycles? As the price of land rises, the entrepreneurs acquire more land. This increases the price of land. They also accumulate debt. The debt constrains their ability to borrow and eventually demand for land declines and its price falls. A cycle. Notice that this cycle is not generated by shocks to technology or preferences but arises endogenously as land and debt holdings vary over time! I gotta think about this part more….
The MILQs at Spousonomics riff on the subject of “learned incompetence.” It’s the strategic response to comparative advantage in the household: if I am supposed to specialize in my comparative advantage I am going to make sure to demonstrate that my comparative advantage is in relaxing on the couch. Examples from Spousonomics:
Buying dog food. My husband has the number of the pet food store that delivers and he knows the size of the bag we buy. It would be extremely inconvenient for me to ask him for that number.
Sweeping the patio. He’s way better at getting those little pine tree needles out of the cracks. I don’t know how he does it!
A related syndrome is learned ignorance. It springs from the marital collective decision-making process. Let’s say we are deciding whether to spend a month in San Diego. Ideally we should both think it over, weigh and discuss the costs and benefits and come to an agreement. But what’s really going to happen is I am going to say yes without a moment’s reflection and her vote is going to be the pivotal one.
The reason is that, for decisions like this that require unanimity, my vote is only going to count when she is in favor. Now she loves San Diego, but she doesn't surf and so she can't love it nearly as much as me. So she's going to put more weight on the costs in her cost-benefit calculation. I care about costs too but I know that conditional on her being in favor I am certainly in favor too.
Over time spouses come to know who is the marginal decision maker on all kinds of decisions. Once that happens there is no incentive for the other party to do any meaningful deliberation. Then all decisions are effectively made unilaterally by the person who is least willing to deviate from the status quo.
So we come to the remarkable story of Imperial College's self-effacing head librarian, pitted in a battle of nerves against the publisher of titles like the Lancet. She is leading Research Libraries UK (RLUK), which represents the libraries of Russell Group universities, in a public campaign to pressure big publishers to end up-front payments, to allow them to pay in sterling and to reduce their subscription fees by 15%. The stakes are high: library staff and services are at risk, and if an agreement or an alternative delivery plan is not in place by January 2nd next year, researchers at Imperial and elsewhere will lose access to thousands of journals. But Deborah Shorley is determined to take it to the edge if necessary: "I will not blink."
The article is here. Part of what's at stake is the so-called "Big Deal" in which Elsevier bundles all of its academic journals and refuses to sell subscriptions to individual journals (or sells them only at exorbitant prices.) Edlin and Rubinfeld give a good overview of the law and economics of Big Deals.
Boater Bow: Not Exactly Rocket Science.
An important role of government is to provide public goods that cannot be provided via private markets. There are many ways to express this view theoretically, a famous one using modern theory is Mailath-Postlewaite. (Here is a simple exposition.) They consider a public good that potentially benefits many individuals and can be provided at a fixed per-capita cost C. (So this is a public good whose cost scales proportionally with the size of the population.)
Whatever institution is supposed to supply this public good faces the problem of determining whether the sum of all individuals' values exceeds the cost. But how do you find out individuals' values? Without government intervention the best you can do is ask them to put their money where their mouths are. But this turns out to be hopelessly inefficient. For example if everybody is expected to pay (at least) an equal share of the cost, then the good will be produced only if every single individual has a willingness to pay of at least C. The probability that happens shrinks to zero exponentially fast as the population grows. And in fact you can't do much better than have everyone pay an equal share.
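The "exponentially fast" claim is easy to check. Suppose each individual's willingness to pay exceeds the per-capita cost C with probability 0.9, independently (a deliberately generous number I am making up). The chance that every single person clears the bar is 0.9 raised to the n:

```python
# Probability that all n individuals are willing to pay at least the per-capita
# cost C, when each one clears that bar independently with probability 0.9
# (a deliberately generous, made-up number).
p = 0.9
for n in (10, 50, 100, 500):
    print(f"n = {n:3d}: P(unanimous willingness to pay) = {p**n:.1e}")
# n =  10: 3.5e-01
# n =  50: 5.2e-03
# n = 100: 2.7e-05
# n = 500: 1.3e-23
```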
Government can help because it has the power to tax. We don’t have to rely on voluntary contributions to raise enough to cover the costs of the good. (In the language of mechanism design, the government can violate individual rationality.) But compulsory contributions don’t amount to a free lunch: if you are forced to pay you have no incentive to truthfully express your true value for the public good. So government provision of public goods helps with one problem but exacerbates another. For example if the policy is to tax everyone then nobody gives reliable information about their value and the best government can do is to compare the cost with the expected total value. This policy is better than nothing but it will often be inefficient since the actual values may be very different.
But government can use hybrid schemes too. For example, we could pick a representative group in the population and have them make voluntary contributions to the public good, signaling their value. Then, if enough of them have signaled a high willingness to pay, we produce the good and tax everyone else an equal share of the residual cost. This way we get some information revelation but not so much that the Mailath Postlewaite conclusion kicks in.
Indeed it is possible to get very close to the ideal mechanism with an extreme version of this. You set aside a single individual and then ask everyone else to announce their value for the public good. If the total of these values exceeds the cost you produce the public good and then charge them their Vickrey-Clarke-Groves (VCG) tax. It is well known that these taxes provide incentives for truthful revelation but that the sum of these taxes will fall short of the cost of providing the public good. Here’s where government steps in. The singled-out agent will be forced to cover the budget shortfall.
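Here is a minimal sketch of that extreme scheme: everyone except the set-aside agent reports a value, the good is produced if reported values cover the cost, each reporter pays the standard pivotal (Clarke/VCG) tax, and the set-aside agent is billed for the shortfall. The values and cost are invented for illustration.

```python
# Sketch of the scheme described above: everyone except one set-aside agent
# reports a value; the good is produced if reported values cover the cost; each
# reporter pays the pivotal (Clarke/VCG) tax; the set-aside agent covers the
# shortfall. The values and cost are made-up numbers.
def pivotal_public_good(values, cost):
    total = sum(values)
    produce = total >= cost
    taxes = []
    for v in values:
        others = total - v
        # An agent pays only if the others alone would not cover the cost, and
        # then pays exactly the amount needed to tip the decision.
        taxes.append(max(0.0, cost - others) if produce else 0.0)
    shortfall = (cost - sum(taxes)) if produce else 0.0
    return produce, taxes, shortfall

produce, taxes, shortfall = pivotal_public_good(values=[40, 35, 30], cost=90)
print(produce)     # True: 40 + 35 + 30 = 105 >= 90
print(taxes)       # [25, 20, 15]: truth-telling is a dominant strategy here
print(shortfall)   # 30, the bill handed to the set-aside agent
```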
Now obviously this is bad policy and is probably infeasible anyway since the poor guy may not be able to pay that much. But the basic idea can be used in a perfectly acceptable way. The idea was that by taxing an agent we lose the ability to make use of information about his value so we want to minimize the efficiency loss associated with that. Ideally we would like to find an individual or group of individuals who are completely indifferent about the public good and tax them. Since they are indifferent we don’t need their information so we lose nothing by loading all of the tax burden on them.
In fact there is always such a group and it is a very large group: everybody who is not yet born. Since they have no information about the value of a public good provided today they are the ideal budget balancers. Today’s generation uses the efficient VCG mechanism to decide whether to produce the good and future generations are taxed to make up any budget imbalance.
There are obviously other considerations that come into play here and this is an extreme example contrived to make a point. But let me be explicit about the point. Balanced budget requirements force today's generation to internalize all of the costs of their decisions. It is ingrained in our thinking that this is the efficient way to structure incentives. For if we don't internalize the externalities imposed on subsequent generations we will make inefficient decisions. While that is certainly true on many dimensions, it is not a universal truth. In particular public goods cannot be provided efficiently unless we offload some of the costs to the next generation.
It occurs to me that in our taxonomy of varieties of public goods, we are missing a category. Normally we distinguish public goods according to whether they are rival/non-rival and whether they are excludable/non-excludable. It is generally easier to efficiently finance excludable public goods because with the threat of exclusion you can get users to reveal how much they are willing to pay for access to the public good.
I read this article about piracy of sports broadcasts and I started to wonder what effect it will have on the business of sports. Free availability of otherwise exclusive broadcasts means that professional sports change from an excludable to a non-excludable public good. This happened to software and music but unique aspects of those goods enable alternative revenue sources (support in the case of software, live performance in the case of music.)
For sports the main alternative is advertising. And since the only way to ensure that the ads can't be stripped out of the hijacked broadcast is to make them part of the broadcast itself, we are going to see more and more ads projected directly onto the players and the field.
And then I started wondering what would be the analogue of advertising to support other non-excludable public goods. The key property is that you cannot consume the good without being exposed to the ad. What about clean air? National defense?
But then I realized that there is something different about these public goods. Not only are they non-excludable (a user cannot be prevented from consuming them), they are also unavoidable: the user himself cannot escape the public good. And there is no reason to finance unavoidable public goods by any means other than taxation.
Here’s the point. If the public good is avoidable, you can increase the user tax (by bundling ads) and trust that those who don’t value the public good very much will stop using it. Given the level of the tax it would be inefficient for them to use it. Knowing that this inefficiency can be avoided you have more flexibility to raise the tax, effectively price discriminating high-value users.
If the public good is unavoidable, everyone pays whether you use ads or just taxation (uncoupled with usage), so there really isn’t any difference.
So this category of an avoidable public good seems a useful one. Can you think of other examples of non-excludable but avoidable public goods? Sunsets: avoidable. National defense: unavoidable.
Bryan Caplan wonders whether economic theory is on the decline. Here are some signs I have noticed:
- Econometrica, the most theory-oriented of the top 4 journals, has a well-publicized mission to publish more applied, general interest articles, and this is indeed happening. This comes at the expense of pure theory as well as theoretical econometrics.
- The new PhD market was, on the whole, difficult for theorists this year. Strong candidates from Yale, Stanford, NYU and Princeton placed much lower than expected, some still without a job offer in North America. As far as I can tell, there will be only two junior theorists hired at top 5 departments.
But there are many positive signs too:
- Theorists have been prime recruiting targets for high-profile private sector jobs: Michael Schwarz and Preston McAfee at Yahoo! and Susan Athey at Microsoft, for example. In addition the research departments in these places are full of theorists-on-leave.
- Despite some overall weakness, theory is and always has been well represented at the top of the junior market. This year Alex Wolitzky, as pure a theorist as there is, is the clear superstar of the market. Here is the list of invitees to the Review of Economic Studies Tour from previous years. This is generally considered to be an all-star team of new PhDs in each year. Two theorists out of seven per year on average. (No theorist last year though.)
- In recent years, two new theory journals, Theoretical Economics and American Economic Journal: Microeconomics, have been adopted by the leading Academic Societies in economics. These journals are already going strong.
- Market design is an essentially brand new field and one of the most important contributions of economics in recent years. It is dominated by theorists.
In my opinion there are some signs of change but, correctly interpreted, these are mostly for the better. Decision theory, always the most esoteric of subfields, has moved to the forefront as a second wave of behavioral economics. Macroeconomics today is more heavily theory-oriented than ever. Theorists (and theory journals) are drawn away from pure theory and toward applied theory not because pure theory has diminished in any absolute sense, but rather because applied theory has become more important than ever.
Professor Caplan offers some related observations in his commentary:
…mathematicians masquerading as economists were never big at GMU, and it’s hard to see how they could do well in the blogosphere either.
I am sure he is not talking about Sandeep and me because we are just as bad at math as all of the other bloggers who pretend to be economists. But just in case he is, I invite him to take a look around. Finally,
My econjobrumors insider tells me that its countless trolls are largely frustrated theorists who feel cheated of the respect they think the profession owes them. Speculation, yes, but speculation born of years of study of their not-so-silent screams.
He is talking about the people who anonymously post sometimes hilarious, sometimes obnoxious vitriol on that outpost of grad student angst known as EJMR. I wonder how he could possibly know the research area of anonymous posters to that web site? Among all the economists who feel cheated out of the respect that they think the profession owes them why would it be that theorists are the most likely to troll?
My duaghter’s 4th grade class read the short story “The Three Questions” by Tolstoy (a two minute read.) This afternoon I led a discussion of the story. Here are my notes.
There is a King who decides that he needs the answers to three questions.
- What is the best time to act?
- Who is the most important person to listen to?
- What is the most important thing to do?
Because if he knew the answers to these questions he would never fail in what he set out to do. He sends out a proclamation in his Kingdom offering a reward to anyone who can answer these questions but he is disappointed because although many offer answers…
All the answers being different, the King agreed with none of them, and gave the reward to none.
So instead he went to see a hermit who lived alone in the Wood and who might be able to answer his questions. The King and the hermit spend the day in silence digging beds in the ground. Growing impatient, the King confronts the hermit and makes one final request for the answers to the King’s questions. But before the hermit is able to respond they are interrupted by a wounded stranger who needs their help. They bandage the stranger and lay him in bed and the King himself falls asleep and does not awake until the next morning.
As it turns out the stranger was intending to murder the King but was caught by the King's bodyguard and stabbed. Unknowingly the King saved his enemy's life and now the man was eternally grateful and begging for the King's forgiveness. The King returns to the hermit and asks again for the answers to his questions.
“Do you not see,” replied the hermit. “If you had not pitied my weakness yesterday, and had not dug those beds for me, but had gone your way, that man would have attacked you, and you would have repented of not having stayed with me. So the most important time was when you were digging the beds; and I was the most important man; and to do me good was your most important business. Afterwards when that man ran to us, the most important time was when you were attending to him, for if you had not bound up his wounds he would have died without having made peace with you. So he was the most important man, and what you did for him was your most important business. Remember then: there is only one time that is important– Now! It is the most important time because it is the only time when we have any power. The most necessary man is he with whom you are, for no man knows whether he will ever have dealings with any one else: and the most important affair is, to do him good, because for that purpose alone was man sent into this life!”
We are left to decide for ourselves what the King will do with these answers. The King abhors uncertainty. This is why he discarded the many different answers given by the learned men in his Kingdom. The simplicity of the hermit's advice is bound to appeal to the King. It is certainly a rule that can be applied in any situation. And it is indeed motivated by acknowledgement of uncertainty in the extreme. The Here and Now are the only certainties. And it follows from uncertainty about where you will be in the future, with whom you will be, and what options will be before you that the Here and Now are the most important.
(The hermit is not only outlining a foundation for hyperbolic discounting, but also a Social Welfare analog. Your social welfare function should heavily discount all people except those who are before you right now.)
But what would come of the King were he to follow the advice of the hermit? Imagine what it would be like to live like that. Would you ever even make it to the bathroom to brush your teeth? How many opportunities and people would distract you along the way?
If the hermit’s advice were any good then surely the hermit himself must follow it. Perhaps the hermit was a King once.
You probably know the Ellsberg urn experiment. In the urn on the left there are 50 black balls and 50 red balls. In the urn on the right there are 100 balls, some of them red and some of them black. No further information about the urn on the right is given. Subjects are allowed to pick an urn and bet on a color. They win $1 if the ball drawn from the urn they selected is the color they bet.
Subjects display aversion to ambiguity: they strictly prefer to bet on the left urn where the odds are known than on the right urn where the odds are unknown. This is known as Ellsberg’s paradox, because whatever probabilities you attach to the distribution of balls in the right urn, there is a color you could bet on and do at least as well as the left urn. This experiment revealed a new dimension to attitudes towards uncertainty that has the potential to explain many puzzles of economic behavior. (The most recent example being the job-market paper of Gharad Bryan from Yale who studies the extent to which ambiguity can explain insurance market failures in developing countries.)
Decades and thousands of papers on the subject later, there remains a famous critique of the experiment and its interpretation due to Raiffa. The subjects could "hedge" against the ambiguity in the right urn by tossing a coin to decide whether to bet on red or black. To see the effect of this, note that if there are n black balls and (100-n) red balls, then the coin toss means that with 50% probability you bet on black and win with n% probability, and with 50% probability you bet on red and win with (100-n)% probability, giving you a total probability of winning equal to 50%. Exactly the same odds as the left urn no matter what the actual value of n is. Given this ability to remove ambiguity altogether, the choices of the subjects cannot be interpreted as having anything to do with ambiguity aversion.
Kota Saito begins with the observation that the Raiffa randomization is only one of two ways to remove the ambiguity from the right urn. Another way is to randomize ex post. Hypothetically: first draw the ball, observe its color, and then toss a coin to decide whether to bet on red or black. Like the ex-ante coin tossing, this strategy guarantees that you have a 50% chance of winning. Kota points out that theories that formalize ambiguity assume that these two strategies are viewed equivalently by decision-makers. If a subject is ambiguity averse, then he prefers either form of randomization to the right urn and he is indifferent between either of them and the left urn.
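The arithmetic behind the equivalence is worth making concrete: for every possible composition of the ambiguous urn, both the ex-ante and the ex-post coin toss deliver objective odds of exactly one half.

```python
# For every composition n of the ambiguous urn (n black, 100-n red), the ex-ante
# hedge (toss, bet, then draw) and the ex-post hedge (draw, then toss to decide
# which bet you made) both win with probability exactly 1/2.
for n in range(101):
    p_black = n / 100
    ex_ante = 0.5 * p_black + 0.5 * (1 - p_black)   # coin picks the color first
    ex_post = p_black * 0.5 + (1 - p_black) * 0.5   # coin applied after the draw
    assert abs(ex_ante - 0.5) < 1e-12 and abs(ex_post - 0.5) < 1e-12
print("both hedges give a 50% win probability for every n")
```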
But the distinct timing makes them conceptually different. In the ex ante case, after the coin is tossed and you decide to bet on red, say, you still face ambiguity going forward just as you would have if you had chosen to bet on red without tossing a coin. In the ex post case, all of the ambiguity is removed once you have decided how to bet. (There is an old story about Mark Machina's mom that relates to this. See example 2 here.)
Kota disentangles these objects and models a decision-maker who may have distinct attitudes to these two ways of mixing objective randomization with subjectively uncertain prospects. In particular he weakens the standard axiom which requires that the order of uncertainty resolution doesn't matter to the decision-maker. With this weaker assumption he is able to derive an elegant model in which a single parameter encodes the decision-maker's pattern of ambiguity attitudes. Interestingly, the theory implies that certain patterns will not arise. For example, any decision-maker who satisfies Kota's axioms and who displays neutrality toward ex post ambiguity must also display neutrality toward ex ante ambiguity. All other patterns are possible. As it happens, this is exactly what is found in an experimental study by Dominiak and Schnedler.
What’s really cool about the paper is that Kota uses exactly the same setup and axioms to derive a theory of fairness in the presence of randomization. A basic question is whether the “fair” thing to do is to toss a coin to decide who gets a prize, or to give each person an identical, independent lottery. Compared to theories of uncertainty attitudes, our models of preferences for fairness are much less advanced and have barely touched on this kind of question. Kota’s model brings that literature very far very fast.
Kota is a Northwestern PhD (to be) who just defended his dissertation today. You could call it a shotgun defense because Kota's job market was highly unusual. As a 4th year student he was not planning to go on the market until next year, but Caltech discovered him and plucked him off the bench and into the Big Leagues. He starts as an Assistant Professor there in the Fall. Congratulations Kota!
Believe it or not that line of thinking does lie just below the surface in many recruiting discussions. The recruiting committee wants to hire good people but because the market moves quickly it has to make many simultaneous offers and runs the risk of having too many acceptances. There is very often a real feeling that it is safe to make offers to the top people who will come with low probability but that it's a real risk to make an offer to someone for whom the competition is not as strong and who is therefore likely to accept.
This is not about adverse selection or the winner's curse. Slot-constraint considerations appear at the stage where it has already been decided which candidates we like and all that is left is to decide which ones should get offers. Anybody who has been involved in recruiting decisions has had to grapple with this conundrum.
But it really is a phantom issue. It’s just not possible to construct a plausible model under which your willingness to make an offer to a candidate is decreasing in the probability she will come. Take any model in which there is a (possibly increasing) marginal cost of filling a slot and candidates are identified by their marginal value and the probability they would accept an offer.
Consider any portfolio of offers which involves making an offer to candidate F. The value of that portfolio is a linear function of the probability that F accepts the offer. For example, consider making offers to two candidates F and G. The value of this portfolio is

pq[v_F + v_G - c(2)] + p(1-q)[v_F - c(1)] + (1-p)q[v_G - c(1)]

where p and q are the acceptance probabilities, v_F and v_G are the values, and c(k) is the cost of hiring k candidates in total. This can be re-arranged to

q[v_G - c(1)] + p[v_F - (1-q)c(1) - qm]

where m = c(2) - c(1) is the marginal cost of a second hire. If the bracketed expression is positive then you want to include F in the portfolio and the value of doing so only gets larger as p increases. (note to self: wordpress latex is whitespace-hating voodoo)

In particular, if F is in the optimal portfolio, then that remains true when you raise p.
It’s not to say that there aren’t interesting portfolio issues involved in this problem. One issue is that worse candidates can crowd out better ones. In the example, as the probability that accepts an offer,
, increases you begin to drop others from the portfolio. Possibly even others who are better than
.
For example, suppose that the department is slot-constrained and would incur the Dean's wrath if it hired two people this year. Even if it is candidate G that you prefer, you will nevertheless make an offer only to F if p is very high.
In general, I guess that the optimal portfolio is a hard problem to solve. It reminds me of this paper by Hector Chade and Lones Smith. They study the problem of how many schools to apply to, but the analysis is related.
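For the handful of candidates a department is actually weighing, the brute-force version of the calculation is easy to write down. A sketch with made-up values, probabilities and costs (the combinatorial explosion as the candidate pool grows is exactly what makes the Chade-Smith style analysis interesting):

```python
# Brute force the offer-portfolio problem for a small candidate pool: for every
# subset of offers, compute the expected value over who accepts, with a convex
# cost of total hires. All values, probabilities and costs are made up.
from itertools import combinations, product

candidates = {            # name: (value, probability of accepting an offer)
    "F": (8.0, 0.5),
    "G": (10.0, 0.3),
    "H": (6.0, 0.5),
}
COST = [0.0, 1.0, 13.0, 40.0]     # cost of hiring 0, 1, 2, 3 people ("Dean's wrath")

def expected_value(offers):
    total = 0.0
    for accepts in product([0, 1], repeat=len(offers)):
        prob, value, hires = 1.0, 0.0, 0
        for name, a in zip(offers, accepts):
            v, p = candidates[name]
            prob *= p if a else 1 - p
            value += v * a
            hires += a
        total += prob * (value - COST[hires])
    return total

portfolios = [c for r in range(len(candidates) + 1)
              for c in combinations(candidates, r)]
best = max(portfolios, key=expected_value)
print(best, round(expected_value(best), 2))     # ('F', 'G') 4.55
# Now raise F's acceptance probability to 0.95 and rerun: G drops out of the
# optimal portfolio even though G is the more valuable candidate.
```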
What is probably really going on when the titular quotation arises is that factions within the department disagree about the relative values of v_F and v_G. If F is a theorist and G a macro-economist, the macro-economists will foresee that a high p means no offer for G.
Another observation is that Deans should not use hard offer constraints but instead expose the department to the true marginal cost curve, understanding that the department will make these calculations and voluntarily ration offers on its own. (When p is not too high, it is optimal to make offers to both F and G; a hard offer constraint prevents that.)
The Texas legislature is on the verge of passing a law permitting concealed weapons on University campuses, including the University of Texas where just this Fall my co-author Marcin Peski was holed up in his office waiting out a student who was roaming campus with an assault rifle.
This post won’t come to any conclusions, but I will try to lay out the arguments as I see them. More guns, less crime requires two assumptions. First, people will carry guns to protect themselves and second, gun-related crime will be reduced as a result.
There are two reasons that crime will be reduced: crime pays off less often, and sometimes it leads to shooting. In a perfect world, a gun-toting victim of a crime simply brandishes his gun and the criminal walks away or is apprehended and nobody gets hurt. In that perfect world the decision to carry a gun is simple. If there is any crime at all you should carry a gun because there are no costs and only benefits. And then the decision of criminals is simple too: crime doesn't pay because everyone is carrying a gun.
(In equilibrium we will have a tiny bit of crime, just enough to make sure everyone still has an incentive to carry their guns.)
But the world is not perfect like that and when a gun-carrying criminal picks on a gun-carrying victim, there is a chance that either of them will be shot. This changes the incentives. Now your decision to carry a gun is a trade-off between the chance of being shot versus the cost of being the victim of a crime. The people who will now choose to carry guns are those for whom the cost of being the victim of a crime outweigh the cost of an increased chance of getting shot.
If there are such people then there will be more guns. These additional guns will reduce crime because criminals don’t want to be shot either. In equilibrium there will be a marginal concealed-weapon carrier. He’s the guy who, given the level of crime, is just indifferent between being a victim of crime and having a chance of being shot. Everyone who is more willing to escape crime and/or more willing to face the risk of being shot will carry a gun. Everyone else will not.
In this equilibrium there are more guns and less crime. On the other hand there is no theoretical reason that this is a better outcome than no guns, more crime. Because this market has externalities: there will be more gun violence. Indeed the key endogenous variable is the probability of a shootout if you carry a gun and/or commit a crime. It must be high enough to deter crime.
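A toy equilibrium calculation makes the trade-off concrete. All of the functional forms and numbers below are my own assumptions, with the shootout probability treated as an exogenous parameter rather than determined within the model.

```python
# Toy equilibrium (all functional forms and numbers are my own assumptions).
# Victims' cost of being victimized is uniform on [0,1]; criminals' gain from a
# successful crime is uniform on [0,1]; pi is the exogenous chance that an
# armed encounter ends in a shooting.
def equilibrium(pi, shot_cost_victim=2.0, shot_cost_criminal=2.0):
    # A victim carries iff avoiding the crime is worth the shooting risk
    # (the chance of being targeted multiplies both sides and cancels).
    carry = max(0.0, 1.0 - pi * shot_cost_victim)
    # A criminal offends iff the expected gain beats the expected cost of
    # running into an armed victim.
    if carry >= 1.0:
        crime = 0.0
    else:
        cutoff = carry * pi * shot_cost_criminal / (1.0 - carry)
        crime = max(0.0, 1.0 - min(1.0, cutoff))
    shootings = crime * carry * pi          # per potential criminal
    return carry, crime, shootings

for pi in (0.0, 0.1, 0.3):
    carry, crime, shootings = equilibrium(pi)
    print(f"pi={pi:.1f}: carry {carry:.2f}, crime {crime:.2f}, shootings {shootings:.3f}")
# pi=0.0: everyone carries costlessly and crime is fully deterred (the 'perfect
# world' case). As pi rises, fewer carry, some crime returns, and shootings
# appear: more guns and less crime, but not obviously a better outcome.
```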
And there may not be much effect on crime at all. Whose elasticity with respect to increased probability of being shot is larger, the victim or the criminal? Often the criminal has less to lose. To deter crime the probability of a shooting may have to increase by more than victims are willing to accept and they may choose not to carry guns.
There is also a free-rider problem. I would rather have you carry the gun than me. So deterrence is underprovided.
Finally, you might say that things are different for crimes like mugging versus crimes like random shootings. But really the qualitative effects are the same and the only potential difference is in terms of magnitudes. And it's not obvious which way it goes. Are random assailants more or less likely to be deterred? As for the victims, on the one hand they have more to gain from carrying a gun when they are potentially faced with a campus shooter, but if they plan to make use of their gun they also face a larger chance of getting shot.
NB: nobody shot at the guy at UT in September and the only person he shot was himself.
(Regular readers of this blog will know I consider that a good thing.)
The fiscal multiplier is an important and hotly debated measure for macroeconomic policy. If the government spends an additional dollar, a dollar’s worth of output is produced, but in addition the dollar is added to disposable income of the recipients who then spend some fraction of it. More output is produced, etc.
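The textbook version of that chain is a geometric series: if recipients spend a fraction c of each extra dollar, output rises by 1 + c + c^2 + ... = 1/(1 - c). A quick check that the rounds add up, with c = 0.5 as a purely illustrative marginal propensity to consume:

```python
# Each round of induced spending is a fraction c of the previous round; the
# rounds sum to the textbook multiplier 1/(1 - c). c = 0.5 is purely illustrative.
c = 0.5
total, this_round = 0.0, 1.0      # the initial dollar of government spending
for _ in range(30):
    total += this_round
    this_round *= c               # recipients spend a fraction c of what they receive
print(round(total, 4), 1 / (1 - c))   # 2.0 2.0
```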
It’s hard to measure the multiplier because observed increases in government spending are endogenous and correlated with changes in output for reasons that have nothing to do with fiscal stimulus.
Daniel Shoag develops an instrument which isolates a random component to state-level government spending changes.
Many US states manage pensions that are defined-benefit plans. Defined benefit means that retirees are guaranteed a certain benefit level. This means that the state government bears all of the risk from the investments of these pension funds. Excess returns from these funds are unexpected exogenous windfalls to state spending budgets.
With this instrument, Daniel estimates that an additional dollar of state government spending increases income in the state by $2.12. That is a large multiplier.
The result must be interpreted with some caveats in mind. First, state spending increases act differently than increases at the national level where general equilibrium effects on prices and interest rates would be larger. Second, these spending increases are funded by windfall returns. The effects are likely to be different than spending increases funded by borrowing which crowds out private investment.
Celebrity-Yogis like Bikram Choudhury claim copyright of certain sequences of Yoga postures. That's kinda like Wilt Chamberlain patenting his favorite, ahem, postures; but while the Kama Sutra has yet to establish priority, India's Traditional Knowledge Digital Library is placing all known yoga asanas into the public domain.
Nine well known yoga institutions in India have helped with the documentation. “The data will be up online in the next two months. In the first phase, we have videographed 250 ‘asanas’ — the most popular ones. Chances of misappropriation with them are higher. So if somebody wants to teach yoga, he does not have to fight copyright issues. He can just refer to the TKDL. At present, anybody teaching Hot Yoga’s 26 postures has to pay Choudhury franchisee fee because he holds copyright on them,” Dr Gupta added.
Fez flip: BoingBoing
This week I switched to models of conflict where each player puts positive probability on his opponent being a dominant strategy type who is hawkish/aggressive in all circumstances. This possibility increases the incentive of a player to be aggressive if actions are strategic complements and decreases it if actions are strategic substitutes. The idea that fear of an opponent’s motives might drive an otherwise dovish player into aggression comes up in Thucydides (“The growth of Athenian power and the fear this caused in Sparta, made war inevitable.”) and also Hobbes. But both sides might be afraid and this simply escalates the fear logic further. This was most crisply stated by Schelling in his work on the reciprocal fear of surprise attack (“[I]f I go downstairs to investigate a noise at night, with a gun in my hand, and find myself face to face with a burglar who has a gun in his hand, there is a danger of an outcome that neither of us desires. Even if he prefers to leave quietly, and I wish him to, there is a danger that he may think I want to shoot, and shoot first. Worse, there is danger that he may think that I think he wants to shoot. Or he may think that I think he thinks I want to shoot. And so on.”). Similar ideas also crop up in the work of political scientist Robert Jervis.
Two sided incomplete information can generate this kind of effect. It arises in global games and can imply there is a unique equilibrium while there are multiple equilibria in the underlying complete information game. But the theory of global games relies on players’ information being highly correlated. Schelling’s logic does not seem to rely on correlation and we can imagine conflict scenarios where types/information are independent and yet this phenomenon still arises. In this lecture, I use joint work with Tomas Sjöström to identify a common logic for uniqueness that is at work for information structures with positively correlated types or independent types. Our sufficient conditions for uniqueness can be related to conditions that imply uniqueness in models of Bertrand and Cournot competition.
With these models in hand, we have some way of operationalizing Hobbes' second motive for war, fear. I will return to these results and models in future classes, using them as building blocks to study other issues. Here are the slides.

Tennis commentators will typically say about a tall player like John Isner or Marin Cilic that their height is a disadvantage because it makes them slow around the court. Tall players don’t move as well and they are not as speedy.
On the other hand, every year in my daughter’s soccer league the fastest and most skilled player is also among the tallest. And most NBA players of Isner’s height have no trouble keeping up with the rest of the league. Indeed many are faster and more agile than Isner. LeBron James is 6’8″.
It is not true that being tall makes you slow. Agility scales just fine with height and it’s a reasonable assumption that agility and height are independently distributed in the population. Nevertheless it is true in practice that all of the tallest tennis players on the tour are slower around the court.
But all of these facts are easily reconcilable. In the tennis production function, speed and height are substitutes. If you are tall you have an advantage in serving and this can compensate for lower than average speed if you are unlucky enough to have gotten a bad draw on that dimension. So if we rank players in terms of some overall measure of effectiveness and plot the (height, speed) combinations that produce a fixed level of effectiveness, those indifference curves slope downward.
When you are selecting the best players from a large population, the top players will be clustered around the indifference curve corresponding to “ridiculously good.” And so when you plot the (height, speed) bundles they represent, you will have something resembling a downward sloping curve. The taller ones will be slower than the average ridiculously good tennis player.
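This is easy to see in a simulation: draw height and speed independently, keep only the players whose combined effectiveness clears a high bar, and the survivors show a clear negative height-speed correlation even though the population correlation is zero. The effectiveness index and cutoff below are made-up numbers.

```python
# Height and speed are drawn independently, but conditioning on a high value of
# an index in which they substitute for each other induces a negative
# correlation among the selected. Weights and the cutoff are made-up numbers.
import random
random.seed(2)

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

players = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200_000)]
elite = [(h, s) for h, s in players if h + s > 3.0]      # "ridiculously good"

print(round(corr(*zip(*players)), 2))    # about 0.0 in the whole population
print(round(corr(*zip(*elite)), 2))      # clearly negative among the elite
```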
On the other hand, when you are drawing from the pool of Greater Winnetka Second Graders with the only screening being "do their parents cherish the hour per week of peace and quiet at home while some other parent chases them around?" you will plot an amorphous cloud. The best player will be the one farthest to the northeast, i.e. tallest and fastest.
Finally, when the sport in question is one in which you are utterly ineffective unless you are within 6 inches of the statistical upper bound in height, then a) within that range height differences matter much less in terms of effectiveness so that height is less a substitute for speed at the margin and b) the height distribution is so compressed that tradeoffs (which surely are there) are less stark. Muggsy Bogues notwithstanding.

From an article in the Boston Globe:
He’s a sought-after source for journalists, a guest on talk shows, and has even acquired a nickname, Dr. Doom. With the effects of the Great Recession still being keenly felt, Roubini is everywhere.
But here’s another thing about him: For a prophet, he’s wrong an awful lot of the time. In October 2008, he predicted that hundreds of hedge funds were on the verge of failure and that the government would have to close the markets for a week or two in the coming days to cope with the shock. That didn’t happen. In January 2009, he predicted that oil prices would stay below $40 for all of 2009, arguing that car companies should rev up production of gas-guzzling SUVs. By the end of the year, oil was a hair under $80, Hummer was on its way out, and automakers were tripping over themselves to develop electric cars. In March 2009, he predicted the S&P 500 would fall below 600 that year. It closed at over 1,115, up 23.5 percent year over year, the biggest single year gain since 2003.
He’s not such an outlier:
To find the answer, Denrell and Fang took predictions from July 2002 to July 2005, and calculated which economists had the best record of correctly predicting “extreme” outcomes, defined for the study as either 20 percent higher or 20 percent lower than the average prediction. They compared those to figures on the economists’ overall accuracy. What they found was striking. Economists who had a better record at calling extreme events had a worse record in general. “The analyst with the largest number as well as the highest proportion of accurate and extreme forecasts,” they wrote, “had, by far, the worst forecasting record.”
But it’s not a bad gig:
(Pictures are worth some number of words you know.)
If markets are efficient then price movements happen because of news that changes the value of assets. Often however stock prices seem to fluctuate even in the absence of any new information. On Black Monday in 1987, US stocks dropped by more than 20% without any obvious reason in terms of information about fundamentals. Are asset price movements simply random, or worse, a result of manipulation?
The question is impossible to answer using data from today’s markets. How can you independently identify what is news? And even when there is verifiably no news how can you rule out the possibility that prices moved precisely because of the absence of some expected good or bad news?
Peter Koudijs has found a wonderful historical episode in which it is possible to identify precisely the days in which news arrives and to measure the effect of news on stock prices. During the 18th century there were a few English companies whose stock was traded both in London and in exchanges in Amsterdam. When the prices of these stocks changed in London, information about the price movements reached Amsterdam via mail boats that crossed the North Sea. When weather prevented these boats from crossing, news was delayed. These weather events enable him to show that a large component of price volatility is directly attributable to the arrival of news. This dramatic picture pretty much summarizes it:
On the 19th of November, shares of the British East India Company (EIC) began to drop in response to a speech by the British politician Charles James Fox, who spoke of "the deplorable state" of its finances. Bad weather delayed the arrival of boats from England to Amsterdam until around the 27th of November. In the intervening period, there was little change in the price of EIC shares on the Amsterdam exchange. But as soon as the boats arrived, the stock price dropped to the level seen on the London exchange a week earlier.
Here is a paper by Hugo Mialon which examines titles of economics papers and how they correlate with publication success and citations.
This paper examines the impact of titles and other characteristics of published economics articles on the ultimate success of these articles, as measured by their cumulative citations over the six year period following their publication. Interestingly, poetic titles are pivotal to the productivity of published empirical papers, but detrimental to that of theoretical papers. Another finding of general interest is that the reputation of authors and their institutions counts much more toward the success of empirical papers than toward that of theoretical papers.
Also, here is Hugo’s paper on Torture, whose title wisely eschews poetry.
Chinese motorists stuck in traffic can now hire someone to sit in jams for them, with entrepreneurs finding a business opportunity in the growing gridlock across the car-crazy country.
As wider car ownership leaves roads more congested in the country of 1.3 billion, motorists can now escape by calling a substitute driver to take their cars to their destinations, China Daily reported Saturday.
As drivers are whisked away on the back of motorcycles, a car service employee sits in traffic for them, the state-run newspaper said.
The service is for “those with urgent dates or business meetings to go to, and those who have flights to catch and can’t afford to wait in a traffic jam for too long,” Huang Xizhong, whose company offers the service in the central city of Wuhan, was quoted as saying.
Before there was PPACA (“ObamaCare”) there was already government-provided health insurance. It’s called bankruptcy. Neale Mahoney studies the extent to which it crowds out demand for conventional health insurance.
Hospitals are required to provide emergency care and typically provide other health-care services without upfront payment. Patients who experience large unexpected health care costs have the option of avoiding some of these costs by declaring bankruptcy. Thus bankruptcy is essentially a form of high-deductible insurance where the deductible is the value of assets seizable by creditors. For many of the poorest households, “bankruptcy insurance” is an attractive substitute for health insurance.
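A back-of-the-envelope version of the substitution, with all numbers invented for illustration: treat seizable assets (plus a filing cost) as the deductible of the implicit bankruptcy policy and compare expected out-of-pocket costs with the premium of a conventional policy.

```python
# Invented numbers, not Mahoney's estimates: a household facing a possible large
# medical bill either pays a premium for conventional insurance or relies on
# "bankruptcy insurance", whose deductible is the value of seizable assets
# (plus a cost of actually filing).
def uninsured_expected_cost(seizable_assets, p_shock=0.05,
                            bill=60_000, filing_cost=5_000):
    # If the bill hits, pay it, or file for bankruptcy and lose seizable assets
    # plus the filing cost, whichever is cheaper.
    return p_shock * min(bill, seizable_assets + filing_cost)

premium = 2_000
for assets in (5_000, 50_000, 200_000):
    cost = uninsured_expected_cost(assets)
    choice = "buy insurance" if premium < cost else "rely on bankruptcy"
    print(f"seizable assets ${assets:>7,}: expected cost uninsured ${cost:>6,.0f} -> {choice}")
# Households with little to seize face a tiny effective deductible, so the
# implicit public insurance crowds out the private kind.
```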
Because states differ in the level of exempted assets, the effective deductible of bankruptcy insurance varies across states. This variation enables Mahoney to measure the extent to which changes in bankruptcy asset exemptions affect households’ incentives to purchase conventional health insurance. Ideally you would like to compare two identical households, one in Delaware (friendly to creditors) and one in Rhode Island (friendly to debtors), and see how the difference in seizable assets affects their health insurance coverage. Things are never so simple, so he uses some statistical finesse to deal with a variety of confounds.
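To fix ideas, here is a toy back-of-the-envelope calculation of the household’s decision. Every number in it is made up for illustration (this is not Mahoney’s model), and it treats the household as risk neutral:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual medical-cost distribution: usually small bills,
# occasionally a catastrophic shock (all numbers invented for illustration).
costs = rng.choice([0, 2_000, 60_000], size=100_000, p=[0.70, 0.25, 0.05])

def expected_uninsured_cost(seizable_assets):
    """Uninsured household: pay medical bills out of pocket up to the value of
    assets creditors can seize; bankruptcy wipes out anything beyond that."""
    return np.mean(np.minimum(costs, seizable_assets))

premium = 2_500  # hypothetical price of conventional coverage

for state, seizable in [("creditor-friendly state (low exemption)", 50_000),
                        ("debtor-friendly state (high exemption)", 10_000)]:
    implicit_deductible = expected_uninsured_cost(seizable)
    print(f"{state}: expected cost of going uninsured = ${implicit_deductible:,.0f}; "
          f"buy conventional insurance at ${premium:,}? {premium < implicit_deductible}")
```

The point is just that a higher asset exemption lowers the effective deductible of “bankruptcy insurance” and so weakens the incentive to buy the real thing.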
The results allow him to address a number of natural questions. If every state were to adopt Delaware’s (restrictive) bankruptcy regulations, 16.3% of uninsured households would purchase insurance. For a program involving government subsidies to achieve the same increase, 44% of health insurance premiums would have to be subsidized.
From a welfare point of view, bankruptcy insurance is inefficient because the uninsured do not internalize some of the costs they impose by using bankruptcy insurance. On the other hand, because they are uninsured, their providers directly bear more of the costs of care, mitigating the moral hazard inefficiency of standard insurance. With this tradeoff in mind we can ask what penalty should be imposed on those who choose not to acquire private health insurance. Mahoney finds that the PPACA penalty is too large by almost a factor of two.
(Regular readers of this blog will know that I consider that a good thing.)
It is rare that I even understand a seminar in econometric theory, let alone come away able to explain it in words, but this one was exceptionally clear.
A perennial applied topic is to try to measure the returns to education. If someone attends an extra year of school how does that affect, say, their lifetime earnings? Absent a controlled experiment, the question is plagued with identification problems. You can’t just measure the earnings of someone with N years of education and compare that with the earnings of someone with N-1 years because those people will be different along other, unobservable, dimensions. For example, if intrinsically smarter students go to school longer and earn more, then that difference will be at least partially attributable to intrinsic smartness, independent of the extra year of school.
Even a controlled experiment has confounding factors. Say you divide the population randomly into two groups and lower the cost of schooling for one group. Then you see the difference in education levels and lifetime earnings among these groups. These data are hard to interpret because different people in the treated group will respond differently to the cost reduction, probably again depending on their unobserved characteristics. Those who chose to get an extra year of education are not a random sample from the treated group.
Torgovitsky shows that under a natural assumption you can nevertheless identify the returns to additional schooling for students of all possible innate ability levels, even if those are unobservable. The assumption is that the ranking of students by educational attainment is unaffected by the treatment. That is, if students of ability level A get more education than students of ability level B when education is costly, they would also get more education than B when education is less costly. (Of course their absolute level of education will be affected.)
The logic is surprisingly simple. Under this assumption, when you look at students at the Qth percentile of educational attainment in the treated and control groups, you know they have the same distribution of unobserved ability. So their difference in earnings is fully explained by their difference in educational attainment. (Remember that the Qth percentile measures relative position within each distribution: the Qth percentile of the treated group’s education distribution corresponds to a higher raw number of years of schooling.)
Not only that, but after some magic (see figure 1 in the paper), the entire function mapping (quantiles of) ability level and education to earnings can be identified from data.
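To see the mechanics, here is a toy simulation. The functional forms and numbers are invented (they are not from the paper), but schooling is made monotone in ability in both groups, loosely mirroring the rank-invariance assumption, and matching quantiles across the two groups recovers the true return to a year of school:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Unobserved ability, identically distributed in the treated and control arms.
ability_c = rng.uniform(size=n)   # control: schooling is costly
ability_t = rng.uniform(size=n)   # treated: schooling cost is lowered

# Schooling is increasing in ability in both arms (rank invariance);
# the treatment shifts everyone's attainment up.
school_c = 10 + 4 * ability_c
school_t = 11 + 5 * ability_t

# Hypothetical true earnings function: 8% higher earnings per year of school,
# plus a direct effect of ability that would confound a naive comparison.
def earnings(school, ability):
    return np.exp(0.08 * school + 0.5 * ability)

earn_c = earnings(school_c, ability_c)
earn_t = earnings(school_t, ability_t)

# Compare the two arms at the same quantile: by rank invariance they have the
# same unobserved ability there, so the earnings gap is due to schooling alone.
for q in (0.25, 0.50, 0.75):
    extra_school = np.quantile(school_t, q) - np.quantile(school_c, q)
    log_earn_gap = np.log(np.quantile(earn_t, q)) - np.log(np.quantile(earn_c, q))
    print(f"q = {q:.2f}: implied return per year = {log_earn_gap / extra_school:.3f} (truth: 0.080)")
```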
In a paper published in the Journal of Quantitative Analysis in Sports, Larsen, Price, and Wolfers demonstrate a profitable betting strategy based on the slight statistical advantage of teams whose racial composition matches that of the referees.
We find that in games where the majority of the officials are white, betting on the team expected to have more minutes played by white players always leads to more than a 50% chance of beating the spread. The probability of beating the spread increases as the racial gap between the two teams widens such that, in games with three white referees, a team whose fraction of minutes played by white players is more than 30 percentage points greater than their opponent will beat the spread 57% of the time.
The methodology of the paper leaves some lingering doubt, however, because the analysis is retrospective and only some of the tested strategies wind up being profitable. A more convincing way to do a study like this is to first make a public announcement that you are doing a test and, using the method discussed in the comments here, secretly document what the test is. Then implement the betting strategy and announce the results, revealing the secretly documented test.
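As an aside on the magnitudes: beating the spread 57% of the time is comfortably profitable once you account for the bookmaker’s cut. Here is the arithmetic, assuming the standard −110 pricing on point-spread bets (that pricing is my assumption, not something reported in the paper):

```python
# Point-spread bet at -110: risk $1.10 to win $1.00 (assumed standard pricing).
# Break-even is at a win probability of about 1.10 / 2.10 = 52.4%.
def expected_profit(win_prob, risk=1.10, win=1.00):
    return win_prob * win - (1 - win_prob) * risk

for p in (0.500, 0.524, 0.570):
    print(f"P(beat the spread) = {p:.3f}: expected profit per bet = ${expected_profit(p):+.3f}")
```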
Navy Captain Owen Honors was relieved of his command of the USS Enterprise. This is the guy behind the viral videos that made the news this week.
I want to blog about the news coverage of the firing. For example, this Yahoo! News article has the headline “Navy Firing Over Videos Raises Questions Of Timing.” Here is the opening paragraph:
The Navy brusquely fired the captain of the USS Enterprise on Tuesday, more than three years after he made lewd videos to boost morale for his crew, timing that put the military under pressure to explain why it acted only after the videos became public.
Two observations:
- Sadly, it does make perfect sense to respond to his firing now by complaining that he wasn’t fired earlier. (And it would make sense to complain less had he never been fired at all.) The firing now reveals that his behavior crossed some line about which the Navy has private information. Now that we know he crossed that line, we have good reason to ask why he wasn’t punished earlier.
- Obviously that fact implies that it is especially difficult for the Navy to fire him now, even if they think he deserves to be fired.
The more general lesson is that, tragically, there is too little reward for changing your mind, and the social forces behind this are perfectly rational and robust. The argument that a mind-changer is someone who recognizes his own mistakes and is mature enough to reverse course cannot win out over the label of “waffler” or some other pejorative. And the force is especially strong when it comes to picking a leader.

Via MR: Delta is running an auction at the airport terminal. In this auction you are a seller, bidding to sell your ticket back to the airline.
Optimists look at this and contemplate the efficiency gains: this is a mechanism for appropriately allocating scarce space on the plane. Pessimists detect a nasty incentive: now that the lowest bidder can be bought off the plane the airline has a stronger incentive to overbook.
The pessimists are right precisely because the optimists are right too.
Consider standard airline pricing with no overbooking. You buy a ticket in advance for a flight next month. Lots of uncertain details are resolved between now and then which determine your actual willingness to pay to fly on the departure date. One month in advance you can only form an expectation of this and that expected value is your willingness to pay for a seat in advance.
This is inefficient because, after the realization of uncertainty, your value for flying could turn out to be lower than that of somebody who didn’t buy a ticket. Efficiency dictates that you should sell your ticket to him on the day of the flight.
One way to implement this is to hold an auction on the day of departure. Put aside the issue that flyers want advance booking for planning reasons. Even without that incentive, just-in-time auctions solve the inefficiency problem with conventional pricing but airlines would never use them.
The reason is that an auction leaves bidders with consumer surplus (or, in the parlance of information economics, information rents). As a simple example, suppose there is a single seat available on the flight and two bidders are bidding for it. An optimal auction is (revenue-equivalent to) a second-price auction, so the winning bidder’s price is equal to the willingness to pay of the second-highest bidder. That is lower than the winner’s willingness to pay and the difference is his consumer’s surplus.
The airline would like to achieve the efficient allocation without leaving you this consumer’s surplus. That is impossible in a spot-auction because the airline can never know exactly how much you are willing to pay and charge you that.
But a hybrid pricing mechanism can implement the efficient allocation and capture all the surplus it generates. And this hybrid pricing mechanism entails overbooking followed by a departure-day auction to sell back excess tickets.
The basic idea is standard information economics. The reason you get your information rents in the spot auction is that you have an informational advantage: only you know your realized willingness to pay. To remove that informational advantage the airline can charge you an entrance fee to participate in the auction before your willingness to pay is realized, i.e. a month in advance as in conventional pricing.
Here is how the scheme works in the simple example. There is one seat available. Instead of selling that single seat to a single passenger, the airline sells two tickets. Then, on the day of departure an auction is held to sell back one ticket to the airline. The person who “wins” this auction and makes the sale will be the person with the lowest realized value for flying. The other person keeps their ticket and flies. On auction day, the winner gets some surplus: the price he will receive is the willingness to pay of the other guy which is by definition higher than his own. (Delta is apparently using a first-price auction, but by revenue equivalence the surplus is the same.)
But in order to get the opportunity to compete in this auction you have to buy a ticket a month in advance. And at that time you don’t know whether you are going to win the auction or fly. The best you can do is calculate your expected surplus from participating in that auction and you are willing to pay the airline that much to buy a ticket. Your ticket is really your entrance pass to the auction. And the price of that ticket will be set to extract all of your expected surplus.
Note that the only way that the airline can achieve these efficiency gains and the accompanying increase in profits is by overbooking at the stage of ticketing. So the pessimists are right.
(You can write down a literal model of all of the above. The conclusion that all of your surplus is extracted would follow if travelers were ex ante symmetric: they all have the same expected willingness to pay at the time of ticketing. But the general conclusion doesn’t require this: all of the efficiency gains from adding a departure-day sellback auction will be expropriated by the airline. That follows from a beautiful paper by Eso and Szentes. To the extent that fliers retain some consumer surplus it is due to ex ante differences in expected willingness to pay. The two fliers with the highest expected surplus will buy tickets at a price equal to the third-highest expected surplus. This consumer surplus is already present in conventional pricing.)
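And if you want to check the arithmetic of the simple symmetric example, here is a toy Monte Carlo. The uniform values and the two-traveler setup are just illustrative assumptions; this is the example above, not the Eso–Szentes model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sims = 500_000

# One seat, two ex-ante symmetric travelers whose values for flying are only
# realized on departure day (assumed uniform on [0, 1] for illustration).
v = rng.uniform(size=(n_sims, 2))
v_high = v.max(axis=1)

# Conventional pricing: sell the single seat a month ahead at the buyer's
# expected value E[v] = 1/2; the seat may end up with the lower-value traveler.
conventional_revenue = 0.5

# Hybrid scheme: overbook (sell two tickets), then hold a departure-day
# sell-back auction.  In a second-price sell-back auction the low-value
# traveler sells and is paid the other traveler's value, so each traveler's
# departure-day payoff is max(v1, v2) whether they fly or sell.
ticket_price = v_high.mean()               # advance fare = E[max(v1, v2)] = 2/3
airline_net = 2 * ticket_price - v_high    # collect two fares, pay the auction price
traveler_net = v_high - ticket_price       # each traveler's payoff net of the fare

print(f"conventional revenue          = {conventional_revenue:.3f}")
print(f"hybrid airline net revenue   ~= {airline_net.mean():.3f} (the efficient surplus, E[max] = 2/3)")
print(f"traveler expected net surplus ~= {traveler_net.mean():.3f} (fully extracted)")
```

The sell-back auction moves the seat to whoever values it most, but the advance fare is priced to claw back the entire expected gain.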
You did take my advice, didn’t you? If you did, then because of the January effect, you bought the S&P 500 at 1180.55 on November 30 and sold it on the first trading day of the new year, yesterday, at 1271.87, making a 7.7% return in a single month.
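For the record, the arithmetic:

```python
buy, sell = 1180.55, 1271.87   # S&P 500: November 30 close and the first trading day of January
print(f"one-month return: {sell / buy - 1:.1%}")   # prints: one-month return: 7.7%
```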