
Tyler (you can call him T, you can call him C, you can call him TC, you can call him Professor TC, you can call him Dr. Ty, you can call him Ty Cow, you can call him Tyce, you can call him T-Dice, you can call him Dr. T Dice Disco Dorang…) asks how California might redesign its constitution.
The underlying problem here is that California is simply a beautiful place to live. It’s not just the climate, or the people, or the geography. It’s something floating around in the air that just makes you happy the whole time you are there. And then the second problem is that there is free entry.
So it really doesn’t matter what you do with the constitution. You can fix the referendum system, you could change the budget process, you could turn the government into Singapore. But that only means that something else has to get hosed to bring the quality of life back down to the level that maintains the zero-rent equilibrium condition with free entry.
Given that, the question boils down to: which part of California do you want to screw up in order to achieve that? This is mostly a distributional question. Bad state government saps rents in one way. Give those rents back and bad local governments will do just fine taking up the slack.
Of course all that is really required for equilibrium is that the quality of life of the marginal resident (or resident-to-be) is sufficiently low. This is completely consistent with high average quality of life, but it’s not clear to me why a well-functioning government would be better at achieving such a distribution than the one they’ve got now. That is, who but the marginal resident is more affected by high taxes and dysfunctional government?
(The cheapest way to target the marginal resident is to make it infinitely costly to enter. But that gives huge rents to those lucky enough to live there already and the temptation to take those away would be too great for any government.)
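One loose way to write that zero-rent condition (my own notation, just a sketch of the argument above): let $A$ stand for California’s amenities, $g$ for the quality of governance, and $c(N)$ for congestion and housing costs that rise with the population $N$. Free entry pins the marginal resident down to her outside option $\bar{u}$:

$$A + g - c(N) = \bar{u}$$

Fix the constitution and $g$ rises, but then $N$, and with it $c(N)$, rises until the equality is restored, so the improvement never shows up in the marginal resident’s welfare.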
Miley Cyrus, like many artists, doesn’t want to raise the price of her concert tickets even though there is excess demand. By keeping the price low she allows fans who could not afford the market-clearing price to see her concerts. She is effectively paying to allow them to enjoy her shows. Does this make her an altruist?
A textbook argument against, but one that is wrong, is the following. At the low price there is a market for ticket scalpers. Ticket scalpers will raise the price to the market-clearing level. Those fans who would sell their tickets to scalpers reveal that they prefer the money to the tickets. And they get the money in exchange for the tickets. Likewise those that buy tickets from scalpers reveal that they value the tickets more than the money. So the secondary market makes everyone better off. So if Miley Cyrus were truly an altruist she would allow this to happen rather than paying a price to prevent it.
The problem with the argument is that it works only because the ticket scalper was unanticipated. If all parties knew that tickets would sell at the market-clearing price then the “true fans” that Miley is targeting would never actually get a ticket in the first place, and this would make them worse off. They would never get a ticket either because they could not afford it or, if tickets were originally allocated by lottery, because the scalping rents would attract more entrants to that lottery and crowd them out.
So we can’t argue that Miley is not an altruist. But we can argue that Miley’s refusal to raise prices is perfectly consistent with profit maximization. Here is a model. A fan’s willingness to pay to see Miley Cyrus in concert is a function of who else is there. It’s more fun if she is singing to screaming pre-teen girls because they add to the experience. It’s no fun if she is singing to a bunch of rich parents and their kids who don’t know how to cut loose.
With this model, no matter how much Miley would like to raise the price to take advantage of excess demand, she cannot. Because the price acts as a screening instrument. Higher prices select a less-desirable composition of the audience, lowering willingness to pay. The profit maximizing price is the maximum she can charge before this selection effect starts to reduce demand. At that price and everywhere below there is excess demand.
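Here is a minimal numerical sketch of that screening logic. All of the numbers (capacity, group sizes, cash constraints, the value function) are invented for illustration; the only point is that revenue peaks at the highest price the cash-constrained true fans can pay, with excess demand at that price.

```python
# Toy model: a seat's value depends on the share of "true fans" in the audience,
# and true fans are cash-constrained. Numbers are made up for illustration.
capacity = 1000
true_fans, rich = 5000, 3000          # potential attendees of each type
fan_cash, rich_cash = 30, 150         # the most each type can pay

def seat_value(share_of_true_fans):   # what the show is worth, given the crowd
    return 20 + 100 * share_of_true_fans

def revenue(price):
    fans_in = true_fans if price <= fan_cash else 0   # the price screens out the true fans
    rich_in = rich if price <= rich_cash else 0
    demand = fans_in + rich_in
    if demand == 0 or seat_value(fans_in / demand) < price:   # nobody buys if the show isn't worth it
        return 0
    return price * min(capacity, demand)

for p in (10, 20, 30, 40, 60, 100):
    print(p, revenue(p))              # revenue peaks at p = 30, with 8000 people chasing 1000 seats
```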
This is related to a paper by Simon Board on monopolistic pricing with peer effects.
This weekend we attended a charity auction for my kids’ pre-school. What does a game theorist think about at a charity auction?
- There is a “silent auction” (sealed bid), followed by a live auction (open outcry). How do you decide which items to put in the live auction?
- The silent auction is anonymous, so items with high signaling value should be moved to the live auction. A 1 week vacation in Colorado sold for less than $1000 (who would want to signal that they don’t already have their own summer home?) whereas a day of working as an assistant at Charlie Trotter’s sold for $2500.
- There is a raffle. You sell those tickets at the door when people are distracted and haven’t started counting how much they have spent yet. But what price do you set?
- The economics of the charity auction are such that vendors with high price-over-marginal-cost markups can donate a high-value item (high P) at a low cost to themselves (low MC). This explains why the items usually have a boutique quality to them.
- In the silent auction, you write down your bids with a supplied pen on the bid sheet. Sniping is pervasive. Note for next year: bring a cigarette lighter. You make your last minute bids and then melt the end of the pen just enough to stop the ink from flowing.
- When you are in suburban Winnetka on Chicago’s North Shore, for which kind of item is the winner’s curse the strongest: art or sports tickets/memorabilia?
- One of the live auction side-events is a pure signaling game where you are asked to give an amount of money to a special fund. They start with a very high request and after everyone who is willing to give that much has raised their hand, they continually lower the request. I think this is the right timing. With the ascending version the really big donors will give too early.
- How do you respond when asked to pay to enter a game with the rules to be announced later? Answer: treat it like a raffle. Surprise answer: A chicken will be placed in a cage. The winner of the game is the player whose number the chicken poops on.
That didn’t turn out to be such a good idea. Someone forgot to put a lid on the cage and the chicken, well-versed in the hold-up problem, found a way to use his monopoly power:
That is an actual-use, signed and engraved hockey stick from Patrick Kane of the Chicago Blackhawks. It subsequently sold for over $1000. The chicken was unharmed and eventually spent the evening perched on a rafter high above the proceedings threatening to select a winner directly.
When you are competing to be the dominant platform, compatibility is an important strategic variable. Generally if you are the upstart you want your platform to be compatible with the established one. This lowers users’ costs of trying yours out. Then of course when you become established, you want to keep your platform incompatible with any upstart.
Apple made a bold move last week in its bid to solidify the iPhone/iPad as the platform for mobile applications. Apple sneaked into its iPhone OS developer agreement a new rule that will keep out of its App Store any app developed using cross-platform tools. That is, if you write an application in Adobe’s Flash (the dominant web-based application platform) and produce an iPhone version of that app using Adobe’s portability tools, the iPhone platform is closed to you. Instead you must develop your app natively using Apple’s software development tools. This self-imposed incompatibility shows that Apple believes that the iPhone will be the dominant platform and developers will prefer to invest in specializing in the iPhone rather than be left out in the cold.
Many commentators, while observing its double-edged nature, nevertheless conclude that on net this will be good for end users. John Gruber writes:
Cross-platform software toolkits have never — ever — produced top-notch native apps for Apple platforms…
[P]erhaps iPhone users will be missing out on good apps that would have been released if not for this rule, but won’t now. I don’t think iPhone OS users are going to miss the sort of apps these cross-platform toolkits produce, though. My opinion is that iPhone users will be well-served by this rule. The App Store is not lacking for quantity of titles.
And Steve Jobs concurs.
We’ve been there before, and intermediate layers between the platform and the developer ultimately produces sub-standard apps and hinders the progress of the platform.
Think about it this way. Suppose you are writing an app for your own use and, all things considered, you find it most convenient to write in a portable framework and export a version for your iPhone. That option has just been taken away from you. (By the way, this thought experiment is not so hypothetical. Did you know that you must ask Apple for permission to distribute to yourself software that you wrote?) You will respond in one of two ways. Either you will incur the additional cost and write it using native Apple tools, or you will just give up.
There is no doubt that you will be happier ex post with the final product if you choose the former. But you could have done that voluntarily before and so you are certainly worse off on net. Now the “market” as a whole is just you divided into your two separate parts, developer and user. Ex post all parties will be happy with the apps they get, but this gain is necessarily outweighed by the loss from the apps they don’t get.
Is there any good argument why this should not be considered anti-competitive?
My memory is not so good but it seems to me that professional golfers didn’t use to look so much like race cars.

Perhaps they have been consulting with auction theorists. Selling ad space on your shirt is like a multi-unit auction but with an interesting twist. Like any auction you want to insist on a reserve price to keep revenues high. The reserve acts as a threat not to sell unless bids are high enough and this induces more aggressive bidding. Normally this leads to under-supply, just as a textbook monopolist restricts output to keep prices high.
But here’s the twist. After you have sold the ad on your hat, your auction for an ad on your lapel is a threat against the advertiser on your hat. If you sell an ad on your lapel it’s going to take some focus off the hat.
That means it is in both your interest and the hat-advertiser’s to have him bid for the lapel ad. Yours because more competition is better, and his because he wants to keep the competitors off your lapel. Now think about how your reserve price for the lapel-auction works. Just as before, for the new bidders it is an inducement to bid higher. But for the hat-guy it’s an inducement to lower his bid for your lapel. If you set a high reserve then he can safely lose the auction for your lapel and expect that nobody else will win, which for him is just as good as winning.
This leads you to set a lower reserve on your lapel than you otherwise would. In effect this is a threat to the hat-hawker that if he doesn’t bid high enough to keep your lapel clean, you are going to put someone else’s logo there. That is, you are over-supplying ads (relative to the situation in which the ads had no spillovers.)
When these principles are put to use, two kinds of outcomes can occur. If there is a high enough bidder you will sell exclusive advertising to that bidder. If not, you will sell lots of little ads to little bidders.
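A stylized Monte Carlo sketch of that reserve-price logic, under assumptions of my own choosing (second-price rules, a single outside bidder with a uniform value, and a hat advertiser who loses a fixed amount s whenever a rival’s logo lands on the lapel):

```python
# Second-price auction for the lapel slot with a reserve r. The hat advertiser
# ("hat guy") gets payoff 0 if the slot stays empty, pays the price if he wins,
# and suffers a loss s if a rival wins. The outside bidder bids its value.
# All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
v = rng.uniform(0, 1, 200_000)        # outside bidder's value for the lapel
s = 0.6                               # hat guy's loss if a rival takes the lapel

def hat_payoff(b, r):
    win_alone = (v < r) & (b >= r)             # only the reserve to beat: pay r
    win_vs_rival = (v >= r) & (b > v)          # outbid the rival: pay v
    lose = (v >= r) & (b <= v)                 # rival's logo goes on the lapel
    pay = np.where(win_alone, r, np.where(win_vs_rival, v, 0.0))
    return -(pay + np.where(lose, s, 0.0)).mean()

def best_bid(r):                               # hat guy's best response, by grid search
    grid = np.linspace(0, 1.2, 241)
    return grid[int(np.argmax([hat_payoff(b, r) for b in grid]))]

def revenue(r):
    b = best_bid(r)
    hat_wins = ((v < r) & (b >= r)) | ((v >= r) & (b > v))
    rival_wins = (v >= r) & (b <= v)
    rev = np.where(hat_wins, np.maximum(v, r), np.where(rival_wins, max(b, r), 0.0))
    return rev.mean(), b

# With only the outside bidder, the textbook revenue-maximizing reserve would be 0.5.
for r in (0.0, 0.2, 0.5, 0.8):
    rev, b = revenue(r)
    print(f"reserve {r:.1f}: hat guy bids {b:.2f}, expected revenue {rev:.3f}")
```

With these particular numbers the revenue-maximizing reserve sits well below the textbook 0.5: push the reserve higher and the hat guy stops bidding to protect your lapel.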

While we are on the subject, here are recent prices for apparel real-estate.
If I have a jug of milk that is close to its expiration date and another, newer and unopened, jug of milk I will use up the old milk before opening the new one.
But if I have a batch of coffee that was roasted 2 weeks ago and a new, fresher batch comes in, I will open the new batch and save the old batch to be used up after the newer one is done.
The difference derives from the shape of their expiration curves. Milk stays relatively fresh for a while and then rapidly deteriorates. Its freshness curve is concave. Coffee quality deteriorates quickly after roasting and then stays relatively constant after that. Two-month-old coffee is just as agreeable as five-day-old coffee, but both are much worse than one-day-old coffee. Coffee’s freshness curve is convex.
The shape of the expiration curve determines whether you like or dislike mean-preserving spreads in the age profile of your stash. Convexity means you would choose 1/2-new 1/2-old over all-medium. Concavity means you have the opposite preference.
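A quick check of that Jensen’s-inequality logic, with invented freshness curves whose only important feature is their curvature:

```python
# Average enjoyment from a half-new/half-old stash vs. an all-medium stash,
# under a convex (coffee-like) and a concave (milk-like) freshness curve.
# The functional forms and numbers are purely illustrative.
import numpy as np

def coffee(days):                 # drops fast, then flattens out: convex
    return np.exp(-0.4 * days)

def milk(days):                   # fine for a while, then falls off a cliff: concave
    return 1 - (days / 14) ** 4

profiles = {"all-medium": [7, 7], "half-new, half-old": [1, 13]}
for label, days in profiles.items():
    d = np.array(days)
    print(f"{label:18s} coffee {coffee(d).mean():.2f}   milk {milk(d).mean():.2f}")
```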
What are the expiration curves of other things?
Convex: eggs, bananas, significant others (except mine of course, she gets fresher with age.)
Concave: vegetables, bread, co-authors, this blog post
President Obama has used the Congressional recess to appoint Paul Krugman as Vice Chairman of the Federal Reserve system.
Obama spent the first year in office wooing centrists like Olympia Snowe. That strategy slowed down his reform agenda and did not pay off. The President had to rely on old hardball Chicago politics to pass healthcare reform. He has realized his hope of appealing to the center of the political spectrum is futile. And in any case, it’s the diehard party faithful that decide midterm elections. What better way to energize the base than by appointing their hero, the self-styled conscience of liberalism and economics Nobel Prize winner, Paul Krugman, to the Federal Reserve?
Krugman stands no chance of getting the 60 votes required to survive the usual Senate confirmation process. As his appointment has no direct impact on the budget, the arcane procedure known as “reconciliation,” which requires only a simple majority, cannot be used to give him an up-or-down confirmation vote. Ironically, Krugman will have a huge impact on the budget as he favors expansionary monetary and fiscal policy in recessions. A perpetually gloomy forecaster, Krugman almost always believes a recession is round the corner and for all practical purposes favors large budget deficits all the time. Even if reconciliation could be used, with moderate Democrats against him, it is not clear that Krugman could draw 50 votes. So, a recess appointment was the only possible strategy for Obama.
This is obviously a dangerous move for the President. He is used to hiding his liberal agenda behind the fig-leaf of bipartisanship. With the leaf removed, he feels naked and vulnerable. Obama has gambled that the extreme left must be brought out to retain the Democrats’ hold on Congress. With the Krugman appointment as a flashpoint, Obama risks losing moderates and perversely provoking the extreme right to turn out and vote.
The benefits and risks for Obama are clear but what’s in it for Krugman? He has long wanted to get his hands on the levers of economic policy. But at what cost? He will have to step down from his sinecure as a Times’ columnist. He will have to mothball his textbook, as Ben Bernanke did before him. Most of all, he may regret the demise of the speaking engagements that have helped to bankroll his many houses and apartments in America and beyond. A favorite of the Hollywood glitterati – Ben Affleck is a close friend – Krugman will now have to give up the organic-chicken-and-chardonnay circuit and attend regular Fed meetings in Washington D.C. A dream for a regular economist but perhaps a letdown for a media star like Krugman. Of course as a recess appointee, Krugman can only serve until the next Congress is seated – maybe that is just the right amount of time for him to substitute Ben Bernanke for Ben Affleck in his speed dial.
All in all, an intriguing appointment for all parties concerned.
What would happen if the individual mandate were removed from the health care bill? Republicans are proposing to do that but leave intact the rules on pre-existing conditions. This sounds like a disaster because then the equilibrium is for only the already-sick to have “insurance”: premiums are very high, so the healthy prefer not to buy insurance until they are already sick.
This is not a problem of “skyrocketing costs” as some characterize it. If the same number of people get sick, then costs are the same. It’s the premiums that skyrocket. The problem with that is that health care insurance is no longer insurance.
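To see the unraveling mechanically, here is a minimal sketch with made-up cost numbers and no risk aversion: guaranteed issue, no mandate, and everyone buys only if the premium is below their own expected costs.

```python
# Adverse-selection spiral: the insurer prices at the average cost of last
# year's enrollees, and each year the healthier end of the pool drops out.
# Costs are drawn from an arbitrary illustrative distribution.
import numpy as np

rng = np.random.default_rng(1)
expected_cost = rng.lognormal(mean=7.5, sigma=1.0, size=10_000)   # each person's expected annual costs

premium = expected_cost.mean()                  # year 0: priced at the population average
for year in range(8):
    enrolled = expected_cost > premium          # only people expecting to spend more than the premium buy
    if not enrolled.any():
        break
    premium = expected_cost[enrolled].mean()    # next year's premium reflects who actually enrolled
    print(f"year {year}: enrolled {enrolled.mean():5.1%}, premium {premium:10,.0f}")
```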
But the individual mandate is not the only way to bring the insurance back into health insurance. (And it appears that the penalties are so low that we are headed for this equilibrium anyway. See this article on MR.) Many employer-based health insurance providers use a system of “open enrollment.” You can sign on to the plan when you join, but if you don’t and then decide later that you want to, you must wait until a specific narrow window of time.
I don’t know what the intended purpose of open enrollment is but one effect it has is to give incentives to buy insurance before you get sick. A system like this would work just fine in place of the individual mandate.
Even better: when you turn 21 you are able to buy insurance from any provider regardless of your pre-existing conditions. This right continues as long as you have had insurance continuously. If you chose not to buy insurance in the past (and you could have afforded it) and you wish to buy it now, then you cannot be denied coverage due to a pre-existing condition. However, insurance companies are not required to offer you the same policy as the main pool.
Update: Austin Frakt argues that the penalties are already high enough to avoid the bad equilibrium.
If you are one of the millions of Facebook users who play Playfish games like Pet Society, you are a datum in Kristian Segerstrale’s behavioral economics experiments.
Instead of dealing only with historical data, in virtual worlds “you have the power to experiment in real time,” Segerstrale says. What happens to demand if you add a 5 percent tax to a product? What if you apply a 5 percent tax to one half of a group and a 7 percent tax to the other half? “You can conduct any experiment you want,” he says. “You might discover that women over 35 have a higher tolerance to a tax than males aged 15 to 20—stuff that’s just not possible to discover in the real world.”
Note that these are virtual goods that are sold through the game for (literal) money. And here is the website of the Virtual Economy Research Network which promotes academic research on virtual economies.
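For what it’s worth, here is a purely synthetic sketch of the kind of randomized pricing test the quote describes; the segments, tax rates, and sensitivities are all invented.

```python
# Randomly assign each player a 5% or 7% tax on a virtual good and compare
# purchase rates within demographic segments. All numbers are made up.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
segment = rng.choice(["women 35+", "men 15-20"], n)
tax = rng.choice([0.05, 0.07], n)                              # assigned at random
sensitivity = np.where(segment == "women 35+", 1.0, 3.0)       # hypothetical responsiveness to the tax
buys = rng.random(n) < 0.30 - sensitivity * tax                # purchase probability falls with the tax

for seg in ("women 35+", "men 15-20"):
    for t in (0.05, 0.07):
        grp = (segment == seg) & (tax == t)
        print(f"{seg:10s} tax {t:.0%}: purchase rate {buys[grp].mean():.1%}")
```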

Because he is 17-year-old Russian high-school student Andrey Ternovskiy. He’s the guy who created Chatroulette by himself, on a whim, in 3 months, and is now in the US fielding offers, meeting with investors, and considering never again returning to Moscow.
Should he sell? Would he sell? To frame these questions it is good to start by taking stock of the assets. He has his skills as a programmer, the codebase he has developed so far, and the domain name Chatroulette.com which is presently a meeting place for 30 million users with an additional 1 million new users per day. His skills, however formidable, are perfectly substitutable; and the codebase is trivially reproducible. We can therefore consider the firm to be essentially equal to its unique exclusive asset: the domain name.
Who should own this asset? Who can make the most out of it? In a perfect world these would be distinct questions. Certainly there is some agent, call him G, who could do more with Chatroulette.com than Andrey, but in a perfect world, Andrey keeps ownership of the firm and just hires that person at his competitive wage.
But among the world’s many imperfections, the one that gets in the way here is the imperfection of contracting. How does Andrey specify G’s compensation? Since only G knows the best way to build on the asset, Andrey can’t simply write down a job description and pay G a wage. He’d have to ask G what that job description should be. And that means that a fixed wage won’t do. The only way to get G to do that special thing that will make Chatroulette the best it can be is to give G a share of the profits.
If Andrey is going to share ownership with G, who should have the largest stake? Whoever has a controlling stake in the firm will be the other’s employer. So, should G employ Andrey (as the chief programmer) or the other way around? Andrey’s job description is simple to write down in a contract. Whatever G says Chatroulette should do, Andrey programs that. Unlike when Andrey employs G, G doesn’t have to know how to program, he just has to know what the final product should do. And if Andrey can’t do it, G can just fire him and find someone who can.
So Andrey doesn’t need any stake in the profits to be incentivized to do his job, but G does. So G should own the firm completely and Andrey should be its employee. The asset is worth more with this ownership structure in place, so Andrey will be able to sell for a higher price than he could expect to earn if he were to keep it.
Now that Roger Myerson is one. Today at Northwestern he presented his new work on A Moral Hazard Model of Credit Cycles. It attracted a huge crowd, not surprisingly, and introduced a whole new class of economists to the joy and sweat of a Roger Myerson lecture.
(Roger apparently hasn’t read my advice for giving talks.) Listening to Roger speak is not only thoroughly enlightening and entertaining, it’s calisthenics for the mind. I once brought a pen and pad to one of his talks and outlined his nested digressions. It is absolutely a thing of beauty when every step down the indentation ladder is paired with a matching step on the way back up. When he finally returns to the original stepping off point, no threads are left hanging.
Keeping track of all this in your head and still following the thread of the talk is a bit like Lucy and Ethel wrapping candy.
Still, I think I got the basic point. Roger has a model of credit cycles that falls out naturally from a well-known feature of dynamic moral hazard. In his model, banks are intermediaries between investors and entrepreneurs and they are incentivized via huge bonuses to invest efficiently. These bonuses are paid only when the bankers retire with a record of success.
These backloaded incentives mean that bankers are trusted with bigger funds the closer they are to retirement. That’s when the coming payout looms largest, deterring bankers from diverting the larger sums for their own benefit. Credit cycles are an immediate result. Because bankers near retirement handle larger sums than those just starting out, their retirement means that total investment must go down. So the business cycle tracks the age demographics of the banking sector.
(It’s the Cocoon theory of business cycles, because if you could extend the lives of bankers you would enhance the power of incentives, lowering the moral hazard rents and increasing investment.)
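Here is a stylized sketch of just the demographic mechanism (not Roger’s model): bankers are trusted with funds that grow with seniority, so aggregate investment tracks the age profile of the banking sector; a one-time hiring bulge produces a boom as the cohort ages and a drop when it retires. The career length, capacities, and cohort sizes are invented.

```python
import numpy as np

career = 10                              # years from entry to retirement
capacity = np.arange(1, career + 1)      # funds a banker is trusted with, rising with seniority

entrants = np.ones(40)                   # cohort sizes by entry year
entrants[5] = 4.0                        # a one-time hiring bulge in year 5

investment = np.zeros(40)
for t in range(40):
    for e in range(max(0, t - career + 1), t + 1):
        investment[t] += entrants[e] * capacity[t - e]   # cohort e has seniority t - e at time t

print(np.round(investment))              # a boom as the big cohort gains seniority, then a drop when it retires
```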

In a classic experiment, psychologists Arkes and Blumer randomized theater ticket prices to test for the existence of a sunk-cost fallacy. Patrons who bought season tickets at the theater box office were randomly given discounts, some large some small. At the end of the season the researchers counted how often the different groups actually used their tickets. Consistent with a sunk-cost fallacy, those who paid the higher price were more likely to use the tickets.
A problem with that experiment is that it was potentially confounded by selection effects. Patrons with higher values would be more likely to purchase when the discount was small and they would also be more likely to attend the plays. Now a new paper by Ashraf, Berry, and Shapiro uses an additional control to separate out these two effects.
Households in Zambia were offered a water disinfectant at a randomly determined price. If the price was accepted, then the experimenters randomly offered an additional discount. With these two treatment dimensions it is possible to determine which of the two prices affects subsequent use of the product. They find that all of the variation in usage is explained by the initial offer price. That is, the subjects’ revealed willingness to pay was the only determinant of usage, not the actual payment.
This is the cleanest field experiment to date on the effect of past sunk costs on later valuations and it overturns a widely cited finding. On the other hand, Sandeep and I have a lab experiment which tests for sunk cost effects on the willingness to incur subsequent, unexpected, cost increases. We show evidence of mental accounting: subjects act as if all costs, even those that are sunk, are relevant at each decision-making stage. This is the opposite of the effect found by Arkes and Blumer.
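To see why the second randomization separates screening from sunk costs, here is a synthetic sketch of the two-price design in which, by construction, usage depends on willingness to pay only; the regression then loads on the offer price and puts roughly zero weight on the amount actually paid. Everything here is simulated, not the Ashraf-Berry-Shapiro data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 50_000
wtp = rng.uniform(0, 10, n)                    # willingness to pay for the disinfectant
offer = rng.choice([2.0, 4.0, 6.0], n)         # randomized offer price (screens buyers)
bought = wtp >= offer
discount = rng.choice([0.0, 1.0, 2.0], n)      # surprise discount, randomized after acceptance
paid = np.maximum(offer - discount, 0.0)       # the sunk amount actually handed over

# Usage depends on willingness to pay only (the "no sunk-cost effect" benchmark).
use = (bought & (wtp + rng.normal(0, 2, n) > 5)).astype(float)

X = sm.add_constant(np.column_stack([offer[bought], paid[bought]]))
print(sm.OLS(use[bought], X).fit().params)     # offer price predicts usage; amount paid ~ 0
```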
(Dunce cap doff: Scott Ogawa)
From a worthwhile article in the NY Times surveying a number of facts about e-book and tree-book sales:
Another reason publishers want to avoid lower e-book prices is that print booksellers like Barnes & Noble, Borders and independents across the country would be unable to compete. As more consumers buy electronic readers and become comfortable with reading digitally, if the e-books are priced much lower than the print editions, no one but the aficionados and collectors will want to buy paper books.
Which, translated, reads: publishers don’t want low e-book prices because then people would buy them. Note that according to the article, profit margins are larger for e-books than for pulp. (Confused? Marginal revenue accounts for cross-platform cannibalization, and is still set equal to marginal cost.)
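For the still-confused reader, one way to write that first-order condition (my notation, a sketch): selling one more e-book brings in its own marginal revenue but also eats into the print margin,

$$MR_e + (p_{print} - c_{print})\,\frac{\partial q_{print}}{\partial q_e} = c_e,$$

and since the cannibalization term is negative, the profit-maximizing e-book price sits higher than the e-book margin alone would suggest.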
Greg Mankiw often says this:
A tax on height follows inexorably from the standard utilitarian approach to the optimal design of tax policy coupled with a well-established empirical regularity.
Because this is part of his argument against income redistribution. As I have said before (and see a nice comment there by Ilya) this is based on a misunderstanding of the theory of taxation. It does not matter what the government’s underlying objective is, whether it is utilitarian or anything else. If the government wants to raise money, for whatever purpose, say to provide education or pay the President’s economic advisors or fight wars, it wants to do so in the least distortionary way.
Minimizing the distortions means making use of instruments that are correlated with ability to pay but are exogenous, i.e. unaffected by tax policy. As Mankiw points out (the “well-established empirical regularity”), height is correlated with ability to pay and clearly the tax code does not affect how tall you are. So by conditioning your tax payments (at least partially) on your height, the government can raise the same amount of revenue as a given pure income tax with less distortionary effects on your labor supply.
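A toy numerical version of that claim, with a quasi-linear worker whose hours respond only to the marginal income-tax rate (all numbers invented): conditioning part of the tax bill on height lets the government hit the same revenue target with a lower marginal rate and a smaller distortion.

```python
# Worker chooses hours h to maximize w*h*(1-t) - h^2/2 - lump_sum, so h = w*(1-t):
# only the marginal rate t distorts hours; a height-based lump sum does not.
w = 20.0                                    # wage

def outcomes(t, lump_sum):
    h = w * (1 - t)
    revenue = t * w * h + lump_sum
    utility = w * h * (1 - t) - h**2 / 2 - lump_sum
    return revenue, utility

# Pure income tax: t*w*w*(1-t) = 50  =>  t ~ 0.146
print(outcomes(0.1464, 0.0))                # revenue ~ 50, utility ~ 145.7
# Part of the bill as a "height tax": a lump sum of 30 plus t ~ 0.053 raises the same 50
print(outcomes(0.0528, 30.0))               # revenue ~ 50, utility ~ 149.4: same revenue, less distortion
```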
It has nothing to do with utilitarianism. (And your natural objection to taxing height therefore says nothing about your attitudes toward income redistribution.)
My parents are visiting from England. My father arrived with a loose molar which had to be removed. We do not know good dentists in the area and my father does not have American dental insurance. We settled on Tufts Medical School’s Emergency Dental Center for the extraction. Three and a half hours later we were out of there, leaving one molar and $180 in our wake.
My father thinks he now needs an implant so I quickly checked out options:
1. A New York Times ad offered an implant for $1000 (not including the crown).
2. In the U.K. it costs about the same.
3. Given the high prices in the U.K. and U.S., an attractive option is to go to Hungary. A quick search suggested a price of $700 for an implant. So, you can go to Budapest for a bit of a holiday and get your grotty teeth dealt with at the same time.
Why is Hungary so cheap? One nice thing about trade in dental services is that it largely must be one way: Hungarians aren’t going to Britain to go to dentists! This makes it different from other products like cars which countries export and import simultaneously. This is hard to explain with traditional trade theory. The traditional theory should be adequate for trade in dental services though. Ricardian trade theory suggests differences in technology drive trade patterns. Is that the explanation? Heckscher-Ohlin’s theory of trade instead assumes technology is the same across countries and differences in endowments drive trade patterns. Is the endowment of dentists large in Hungary? At this point, I’m stuck.
We tend to think of intellectual property law as targeted mostly at big ideas with big market value. But for every big idea there are zillions of little ideas whose value adds up to more. Little ideas are little because they are either self-contained and make marginal contributions or they are small steppingstones, to be combined with other little ideas, which eventually are worth a lot.
It’s now cheap to spread little ideas, whereas before even very small communication costs made most of them prohibitively expensive to share. In some cases this is good, but in some cases it can be bad.
When it comes to the nuts and bolts kinds of ideas, like say how to use perl to collect data on the most popular twitter clients, ease of dissemination is good and intellectual property is bad. IP protection would mean that the suppliers of these ideas would withhold lots of them in order to profit from the remainder. Without IP protection there is no economic incentive to keep them to yourself and the infinitesimal cost of sharing them is swamped by even the tiniest pride/warm glow motives.
Now the usual argument in favor of IP protection is that it provides an economic incentive for generating these ideas. But we are talking about ideas that don’t come from research in the active sense of that word. They are the byproduct of doing work. When it’s cheap to share these ideas, IP protection gets in the way.
The exact same argument applies to many medium-sized ideas as well. And music.
But there are ideas that are pure ideas. They have no value whatsoever except as ideas. For example, a story. Or basic research. The value of a pure idea is that it can change minds. Ideas are most effective at changing minds when they arrive with a splash and generate coordinated attention. If some semblance of the idea existed in print already, then even a very good elaboration will not make a splash. “That’s been said/done before.”
It’s too easy now to spread 1/nth-baked little ideas. Before, when communication costs were high, it took investment in polishing and marketing to bring the idea to light. So ideas arrived slowly enough for coordinated attention, and big enough to attract it. Now, there will soon be no new ideas.
Blogs will interfere with basic research, especially in the social sciences.
When it comes to ideas, here’s one way to think about IP and incentives to innovate. It’s true that any single individual needs extra incentive to spend his time actively trying to figure something out. That’s hard and it takes time. But, given the number of people in the world, 99.999% of the ideas that would be generated by active research would almost certainly just passively occur to at least one individual.
Or more generally, does your initial job placement matter for your long-term success? Or does “bad luck” on the job market eventually wash out? A 2006 paper from Paul Oyer looks at this question.
In this paper, I show that initial career placement matters a great deal in determining the careers of economists. Each place higher in rank of initial institution causes an economist to work at an institution ranked 0.6 places higher in the time period from three to 15 years later. Also, the fact that an economist originally gets a job at a top-50 institution makes that economist 60 percent more likely to work at a top-50 school later in his or her career. While it would obviously come as no surprise to find that economists at higher-ranked schools have higher research output, I will present evidence that for a given economist—that is, holding innate ability constant—obtaining an initial placement at a higher-ranked institution leads to greater professional productivity.
He circumvents the obvious endogeneity issue: there may be some measure of your quality that can’t be observed in the data and then lower initial placement is going to be correlated with lower intrinsic quality. The way he gets around this is to compare cohorts in strong-market years with cohorts from weaker years. Suppose that the business cycle is uncorrelated with your intrinsic skill and bad times means worse than usual placement. Then the same quality worker will have worse placement in weak-market years.
In fact, Oyer finds that students who enter the market in weak years are less successful even in the long run. This is evidence that their initial placement mattered.
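A synthetic sketch of that identification idea (simulated numbers, not Oyer’s data): ability is unobserved, but a weak job-market year shifts initial placement independently of ability, so comparing long-run outcomes across cohort years recovers the causal effect of placement while a naive regression overstates it.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
ability = rng.normal(0, 1, n)                      # unobserved quality
bad_year = rng.integers(0, 2, n)                   # graduated into a weak market?
placement = ability - 0.5 * bad_year + rng.normal(0, 1, n)
long_run = ability + 0.3 * placement + rng.normal(0, 1, n)   # true placement effect = 0.3

naive = np.polyfit(placement, long_run, 1)[0]      # biased up by unobserved ability
wald = ((long_run[bad_year == 1].mean() - long_run[bad_year == 0].mean())
        / (placement[bad_year == 1].mean() - placement[bad_year == 0].mean()))
print(f"true 0.30, naive {naive:.2f}, cohort-year comparison {wald:.2f}")
```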
There remain some selection problems, however. For example, students have choice over which year to enter the market. It could be that, anticipating the worse placements, the best students enter the market a year before a downturn or wait a year after. Also, in bad years the best students might find altogether better alternatives than academia and go to the private sector.
Here’s my idea for a different instrument: couples. It often happens that a student on the market has a spouse who is seeking a job in the private sector. Finding a good job in the same city for both partners is more constraining than a solo search and typically the student will have to compromise, taking their second- or third-best offer.
If being married at the time of entering the market is uncorrelated with your unobservable talent as an economist, then a difference in the long-run success of PhDs with dual searches would be evidence of the effect of initial placement.
(I would focus on academic-private sector couples. In an academic-academic couple, the two quite often market themselves as a bundle to the same institution and the worse of the two gets a better placement than he would if he were solo. But it would be interesting to compare academic-academics to academic-private.)
(Casquette cast: Seema Jayachandran)
Job market interviewing entails a massive duplication of effort. You interview with each of your potential employers individually imposing costs on them and on you. Even in the economics PhD job market, a famously efficient matching process, we go through a ridiculous merry-go-round of interviews over an entire weekend. Each candidate gives essentially the same 30 minute spiel to 20 different recruiting committees.
What if we assigned a single committee to interview every candidate and webcast it to any potentially interested employer? Most recruiting chairs would applaud this but candidates would hate it. Both are forgetting some basic information economics.
Candidates hate this idea because with only one interview, a bad performance would ruin their job market. With many interviews there are certain to be at least a few that go smoothly. But of course there is a flip-side to both of these. If the one interview goes very well, they will have a great outcome. With many interviews there are certain to be a few that go badly. How do these add up?
Auction theory gives a clear answer. Let’s rate the quality of an interview in terms of the wage it leads your employer to be willing to pay. Suppose there are two employers and you give them separate interviews. Competitive bidding will drive the wage up until one of them drops out. That will be the lower of the two employers’ willingness to pay.
On the other hand, if both employers saw the outcome of the same interview, then both would have the same willingness to pay equal to the quality of that one interview. On average the quality of one draw from a distribution is strictly larger than the minimum of two draws from the same distribution. You get a higher wage on average with a single interview.
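A quick numerical check of that claim (uniform interview qualities, purely illustrative): the expected value of one common draw beats the expected minimum of two private draws.

```python
import numpy as np

rng = np.random.default_rng(5)
draws = rng.uniform(0, 1, (1_000_000, 2))                 # two interview outcomes per candidate
print(draws[:, 0].mean(), draws.min(axis=1).mean())       # one pooled draw ~0.50 vs. min of two ~0.33
```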
What’s going on here is that the private information generated by separate interviews gives each employer some market power, so-called information rents. By pooling the information you put the employers on equal footing and they compete away all of the gains from employment, to your benefit.
In fact, pooling interviews is even better than this argument suggests due to another basic principle of information economics: the winner’s curse. When interviews are separate, each employer’s willingness to pay is based on the quality of the interview and whatever he believes was the quality of the other interview. Both interviews are informative signals of your talent. Without knowing the quality of the other interview, when bidding for your labor each employer worries that he might win the bidding only because you tanked the other interview. Since this would be bad news for the winner (hence the curse), each bidder bids conservatively in order to avoid overpaying in such a situation. Your wage suffers.
By pooling interviews you pool the information and take away any doubts of this form. Without the winner’s curse, employers can safely bid aggressively for you.
Going back to the original intuition, it’s true there are upsides and downsides of having separate interviews, but the mechanics of competition magnify the downsides through both of these channels, so in the end separate interviews lead to a lower wage on average than if they were pooled.
Perhaps this explains why, despite their grumblings, economics department recruiters are still willing to spend 18 hours locked in a windowless hotel room conducting interviews.
Addendum: A commenter asked about competition by more than two employers. If six are bidding for you, then eventually the wage has been bid up until four have dropped out of the competition. The price at which they drop out reveals their information to the two remaining competitors. At that point the two-bidder argument applies.
Eddie Dekel points out the following puzzling fact. At the gym most people wipe down the exercise machines and benches after they use them and not before. There are a few obvious social benefits of this policy. For one, you know better than your successor where the towel is most advantageously deployed. Also, the sooner that stuff is removed, the better.
But still it’s a puzzle from the point of view of dynamic efficiency. With this system everyone mops once. But there exists a welfare improving re-allocation where one guy doesn’t mop and after him everyone mops before using the machine. Nobody’s worse off and that one guy is better off. A Pareto improvement.
In fact the ex-post-mop regime is especially unstable because that one guy has a private incentive to trigger the re-allocation. He’s the one who saves effort. So from an abstract point of view this is indeed a puzzle. Moreover, there is this Seinfeldian insight that complicates things even further.
ELAINE: Never mind that, look at the signal I just got.
GEORGE: Signal? What signal?
ELAINE: Lookit. He knew I was gonna use the machine next, he didn’t wipe his sweat off. That’s a gesture of intimacy.
GEORGE: I’ll tell you what that is – that’s a violation of club rules. Now I got him! And you’re my witness!
ELAINE: Listen, George! Listen! He knew what he was doing, this was a signal.
GEORGE: A guy leaves a puddle of sweat, that’s a signal?
ELAINE: Yeah! It’s a social thing.
GEORGE: What if he left you a used Kleenex, what’s that, a valentine?
(conversations with Asher, Ron, Juuso and Eddie. I take all the blame.)
The Econometric Society, which publishes Econometrica, one of the top 4 academic journals in Economics, has taken under its wing the fledgling journal Theoretical Economics and the first issue under the ES umbrella has just been published. TE has rapidly become among the top specialized journals for economic theory and it stands out in one very important respect. All of its content is and always will be freely available and publicly licensed.
Bootstrapping a reputation for a new journal in a crowded field is by itself almost impossible. TE has managed to do this without charging for access, on a minimal budget supported essentially by donations plus modest submission fees, and with the help of a top-notch board of editors who embraced our mission. There is no doubt that the community rallied around our goal of changing the world of academic publishing and it worked.
This is just a start. Already the ES is launching a new open-access field journal with an empirical orientation, Quantitative Economics. Open Access is here to stay.
T-Cow disagrees that Paul Krugman should be Fed Chairman:
Elsewhere I have to strongly differ with the Johnson-Kwak proposal that Paul Krugman be selected. I don’t intend this as a negative comment on Krugman, if anything I am suggesting he is too dedicated to reading and writing and speaking his mind. The Fed Chair has to be an expert on building consensus and at maintaining more credibility than Congress; even when the Fed screws up you can’t just dump this equilibrium in favor of Fed-bashing. What lies on the other side of that curtain isn’t pretty. Would Krugman gladly suffer the fools in Congress? Johnson and Kwak are overrating the technocratic aspects of the job (which largely fall upon the Fed staff anyway) and underrating the public relations and balance of power aspects. It’s unusual that an academic will be the best person for the job.
Even if you think that Krugman would be the best person for the job that doesn’t imply that we should give him the job. The decision is between Krugman doing what he is doing now and Bernanke as Fed chairman versus Krugman being Fed chairman, nobody doing what Krugman is doing now and Bernanke going back to teaching Princeton undergrads. To prefer the latter it is not enough that Krugman is a better Fed chair than Bernanke. His advantage there must compensate for the other changes in the bundle.
Yahoo! has been building a social science group in their research division. In addition to some well-known economists, they have also been attracting ethnographers and cognitive psychologists away from posts at research universities.
The recruitment effort reflects a growing realization at Yahoo, the second most popular U.S. online site and search engine, that computer science alone can’t answer all the questions of the modern Web business. As the novelty of the Internet gives way, Yahoo and other 21st century media businesses are discovering they must understand what motivates humans to click and stick on certain features, ads and applications – and dismiss others out of hand.
However, there are risks when a for-profit company adopts an academic approach, which calls for publishing research regardless of the outcome. Notably, one set of figures from a study conducted by Reiley, the economist from the University of Arizona, raised eyebrows at Yahoo. … it could underscore a growing immunity to display advertising among the Web-savvy younger generation. The latter possibility would do little to bolster Yahoo’s sales pitch to advertisers hoping to influence this coveted age group. But raising such questions may be the cost of recruiting researchers committed to pure science.
When you search google you are presented with two kinds of links. Most of the links come from google’s webcrawlers and they are presented in an order that reflects google’s PageRank algorithm’s assessment of their likely relevance. Then there are the sponsored links. These are highlighted at the top of the main listing and also lined up on the right side of the page.
Sponsored links are paid advertisements. They are sold using an auction that determines which advertisers will have their links displayed and in what order. While the broad rules behind this auction are public, google handicaps the auction by adjusting bids submitted by advertisers according to what google calls Quality Score. (Yahoo does something similar.)
If your experience with sponsored links is similar to mine you might start to wonder whether Quality Score actually has the effect of favoring lower quality links. Renato Gomes, in his job market paper explains why this indeed might be a feature of the optimal keyword auction.
The idea is based on the well-known principle of handicaps for weak bidders in auctions. Let’s say google is auctioning links for the keyword “books” and the bidders are Amazon.com plus a bunch of fringe sites. If Amazon is willing to bid a lot for the ad but the others are willing to bid just a little, an auction with a level playing-field would allow Amazon to win at a low price. In these cases google can raise its auction revenues by giving a handicap to the little guys. Effectively google subsidizes their bids making them stronger competitors and thereby forcing Amazon to bid higher.
Of course it’s rare that the stronger bidder is so easy to identify and anyway the whole auction is run instantaneously by software. So how would google implement this idea in practice? Google collects data on how often users click through the (non-sponsored) links it provides to searchers. This gives google very good information about how much each web site benefits from link-generated traffic. That’s a pretty good, albeit imperfect, measure of an advertiser’s willingness to pay for sponsored links. And that’s all google would need to distinguish the strong bidders from the weak bidders in a keyword auction.
And when you put that all together you see that the weak guys will be exactly those websites that few people click through to. The useless links. The revenue-maximizing sponsored link auction favors the useless links and as a consequence they win the auction far more frequently than they would if the playing-field were level.
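Here is a small sketch of that handicapping logic using the textbook optimal-auction machinery (my own example, not Renato’s model and certainly not anything Google has disclosed): with a strong bidder whose value is uniform on [0,2] and a weak bidder uniform on [0,1], the revenue-maximizing rule awards the slot to the highest nonnegative virtual value, and the weak bidder wins noticeably more often than on a level playing field.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
v_strong = rng.uniform(0, 2, n)
v_weak = rng.uniform(0, 1, n)

level_field = v_weak > v_strong                       # weak wins a plain second-price auction
phi_strong = 2 * v_strong - 2                         # Myerson virtual value for U[0,2]
phi_weak = 2 * v_weak - 1                             # Myerson virtual value for U[0,1]
handicapped = (phi_weak > phi_strong) & (phi_weak > 0)

print(f"weak bidder wins: level field {level_field.mean():.1%}, optimal auction {handicapped.mean():.1%}")
print(f"weak bidder wins despite the lower value: {(handicapped & (v_strong > v_weak)).mean():.1%}")
```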
(To be perfectly clear, nobody outside of google knows exactly how Quality Score is actually calculated, so nobody knows for sure if google is intentionally doing this. The analysis just shows that these handicaps are a key part of a profit-maximizing auction.)
Renato’s job market paper derives a number of other interesting properties of an optimal auction in a two-sided platform. (Web search is a two-sided platform because the two sides of the market, users and advertisers, communicate through google’s platform.) For example, his theory explains why advertisers pay to advertise but users don’t pay to search. Indeed google subsidizes users by giving them all kinds of free stuff in order to thicken the market and extract more revenues from advertisers. On the other hand, dating sites, and some job-matching sites charge both sides of the market and Renato derives the conditions that determine which of these pricing structures is optimal.
This lecture brings together everything built up to this point. We are going to develop an intuition for why competitive markets are efficient using a model of profit maximizing sellers who compete in an auction market by setting reserve prices. In the previous lecture we saw how the profit maximization motive leads a seller with market power to choose an inefficient selling mechanism. This came in the form of a reserve price above cost. Here we begin by getting some intuition why competition should reduce the incentive to distort price in this way.
(This is probably the weak link in the whole class. I do not have a good idea of how to teach this and in fact I am not sure I understand it so well myself. This is the first place to work on improving the class next time. Any suggestions would be appreciated.)
Finally, we jump to a model with a large number of buyers and sellers all competing in a simultaneous ascending double auction. With so much competition, if sellers set reserve prices above their costs there will be
- no sellers who are doing better than if they just set the reserve price equal to cost
- a positive mass of sellers who would do strictly better by reducing their reserve price to equal their cost
In that sense it is a dominant strategy for all sellers to set reserve price equal to their cost. This equates the “supply” curve with the cost curve and produces the utilitarian allocation. Here are the notes.
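To make the utilitarian benchmark concrete, here is a tiny sketch (toy uniform draws, not from the notes): sort buyers’ values from high to low and sellers’ costs from low to high, and trade pairs as long as value exceeds cost. With reserves equal to costs, the double auction’s supply curve is exactly this cost curve.

```python
import numpy as np

rng = np.random.default_rng(7)
values = np.sort(rng.uniform(0, 1, 1000))[::-1]     # buyers' values, highest first (demand curve)
costs = np.sort(rng.uniform(0, 1, 1000))            # sellers' costs, lowest first (supply = cost curve)

crossing = values < costs
k = int(np.argmax(crossing)) if crossing.any() else len(values)   # number of efficient trades
surplus = float(np.sum(values[:k] - costs[:k]))
print(f"{k} trades, total surplus {surplus:.1f}")
print(f"marginal traded pair: value {values[k - 1]:.2f} vs cost {costs[k - 1]:.2f}")
```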
We saw The Fantastic Mr Fox a few weeks ago. It was a thoroughly entertaining movie and I highly recommend it. But this is not a movie review. Instead I am thinking about movie previews and why we all subject ourselves to sitting through 10-plus minutes of previews.
The movie is scheduled to start at the top of the hour, but we all know that what really starts at the top of the hour are the previews and they will last around 10 minutes at least. Why don’t we all save ourselves 10 minutes of time and show up 10 minutes late?
Maybe you like to watch previews but I don’t and in any case I can always watch them online if I really want to. I will assume that most people would prefer to see fewer previews than they do.
One answer is that the theater will optimally randomize the length of previews so that we cannot predict precisely the true starting time of the movie. To guarantee that we don’t miss any of the film we will have to take the chance of seeing some previews. But my guess is that this doesn’t go very far as an explanation and anyway the variation in preview lengths is probably small.
In fact, even if the theater publicized the true start time we would still come early. The reason is that we are playing an all-pay auction bidding with our time for the best seats in the theater. Each of us decides at home how early to arrive trading off the cost of our time versus the probability of getting stuck in the front row. The “winner” of the auction is the person who arrives earliest, the prize is the best seat in the theater, and your bid is how early to arrive. It is “all pay” because even the loser pays his bid (if you come early but not early enough you get a bad seat and waste your time.)
In an all pay auction bidders have to randomize their bids. Because if you knew how everyone else was bidding you would arrive just before them and win. But then they would want to come earlier too, etc. The randomizations are calibrated so that you cannot know for sure when to arrive if you want to get a good seat and the tradeoffs between coming earlier and later are exactly balanced.
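A quick check of that indifference property in the simplest version (two people, a seat worth V minutes of your time, the rival’s arrival time uniform on [0, V]; numbers invented): every arrival time between showing up on time and V minutes early earns the same expected payoff, which is exactly what a mixed-strategy equilibrium requires.

```python
import numpy as np

V = 30                                              # a good seat is worth 30 minutes of your time
rng = np.random.default_rng(8)
rival = rng.uniform(0, V, 1_000_000)                # rival's (random) arrival time, minutes early

for b in (0, 10, 20, 30):                           # how early you arrive
    payoff = V * (b > rival).mean() - b             # win the good seat if you're earlier; your time is spent either way
    print(f"arrive {b:2d} min early: expected payoff {payoff:+.2f}")
```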
As a result most people arrive early, sit and wait. Now the previews come in. Since we are all going to be there anyway, the theater might as well show us previews. Indeed, even people like me would rather watch previews than sit in an empty theater, so the theater is doing us a favor.
And this even explains why theater tickets are always general admission. Let’s compare the alternative. The theater knows we are “buying” our seats with our time. The theater could try to monetize that by charging higher prices for better seats. But it’s a basic principle of advertising that the amount we are willing to pay to avoid being advertised at is smaller than the amount advertisers are willing to pay to advertise to us. (That is why pay TV is practically non-existent.) So there is less money to be made selling us preferred seats than having us pay with our time and eyeballs.
Regular readers of this blog will know that I consider that a good thing.
The financial crisis is motivating a search for new models of asset markets and their interaction with the real economy. It seems obvious that, for example, the housing bubble can only be explained by a model in which asset prices are bid up by the activity of highly optimistic investors or speculators. Models which build in these divergent beliefs (and not just differences in information) are, perhaps very surprisingly to outsiders, only recently coming to mainstream economic theory.
Alp Simsek asks whether the presence of optimistic traders can inflate the price of assets, say housing prices. It seems obvious, but remember that investment in housing is leveraged using collateralized loans where the house itself is the collateral. If the optimists are borrowing from the “realists” to buy houses at overinflated prices, and they are offering up the house as collateral, then surely the realists aren’t willing to lend?
Alp shows that this logic sometimes holds, but not always. And he formalizes a precise way of measuring optimism which determines whether the presence of optimists will inflate asset prices, or alternatively their optimism will be filtered due to realists’ withholding of credit.
Suppose that you are a realist and you are making a loan to me to purchase a house. A year later we will see whether housing prices have gone up or down. If they go up, I will pay off the loan and realize a profit. If they go down I will default on the loan. A key idea is to understand that the loan effectively makes us partners in the purchase of the house. I own it on the upside (and I pay you back your loan) and you own it on the downside. We pay for the house together too: you contribute the loan amount and I contribute the down payment.
The equilibrium price of the house will be determined by how much we, as partners, are willing to pay. I am an optimist and I would like to pay a lot for it, but I am financially constrained so my contribution to the total price is some fixed amount, my down payment. Thus, our total willingness to pay is determined by how much you are willing to pay to enter this partnership.
Now we can see how my optimism plays a role. Suppose I am more optimistic than you in the sense that I think there is a lower probability of default than you. It turns out this doesn’t make our willingness to pay any higher than it would be if I were a realist just like you. That’s because you own the house in the event of default so it’s the probability that you assign to default that enters into our total value, not the probability that I assign. It’s true that I assign a higher probability to the good event that the price goes up, but I am already putting all of my cash into the partnership. I can’t do anything more to leverage this form of optimism.
But suppose instead that the way in which I am more optimistic than you is slightly different. We both assign the same probability to default, i.e. the event that the price falls. Where we differ is in terms of our beliefs conditional on the price going up. In particular I think that conditional on the upside, the expected price increase is higher than you think it is. Now we have a new way to leverage our partnership. Since I expect to have a higher upside, I am prepared to offer you a higher payment in the event of that upside. (That is, I am willing to pay back a larger loan amount.) And the promise of that higher payment on the upside coupled with the same old house on the downside makes this a strictly more attractive partnership for you and you are willing to pay more to enter it. (That is, you are willing to loan more to me.)
Indeed these collateralized loans seem to be the ideal contracts for us to make the most of our differences in beliefs. And once we see how that works, it is easy to go from there to a theory of a dynamic housing bubble. Tomorrow there might be optimistic investors who will partner with creditors to bid up housing prices. Today, you and I might have differences in beliefs about the probability that those optimistic investors might materialize. If I am more optimistic than you about it, you and I can enter into a partnership which leverages our different beliefs about tomorrow’s differences in beliefs, etc.
There is an important thing to keep in mind when considering models with heterogeneous beliefs. We don’t have a good handle on welfare concepts in these models. For example, in Simsek’s model the efficient allocation is to give the asset to the optimists. Indeed, the financial friction is only an impediment to achieving an efficient allocation. A planner, faced with the same constraint, would not do anything different than the market. If we apply standard welfare notions like this, then these models are not a good framework for discussing financial reform.
Here’s a purely self-interested rationale for affirmative action in hiring. An organization repeatedly considers candidates for employment. A candidate is either good or just average and there are minority and non-minority candidates. The quality of the candidate and his race are observable. The current members decide collectively whether to make a job offer to the candidate.
What’s not observable is whether the applicant is biased against the other race. A biased member prefers not to belong to an organization with members of the other race. In particular, if hired, he will tend to vote against hiring them.
Unbiased non-minority members of such an organization will optimally hold minority applicants to a lower quality standard, at least initially. The reason is simple. An organization with no minority members will have its job offers more often accepted by biased non-minority candidates, who will then make it harder to hire high quality minority candidates in the future. Since bias is not observable, affirmative action is an alternative instrument to ensure that the organization is not hospitable to those who are biased.
The effect is weaker in the opposite direction. Even if there are minority applicants who are biased in favor of minorities, their effect on the organization’s decision-making will be smaller because they are in the minority. So at the margin there is a gain to practicing at least some affirmative action.
(This also explains why every economics department should have at least one structural and one reduced-form empirical economist.)
It’s trendy to get your economist on around the holidays and complain about the inefficiency of gift exchange. Giving money is a more efficient way to make the recipient better off. But that’s a fallacy that only trips up poser-economists. To a real economist, that’s like observing that eating an omelette is an inefficient way to get all of the nutrients we need in our breakfast. Yeah, so? That’s not why I ate it.
A real economist recognizes unregulated, voluntary exchange when he sees it. He doesn’t bother inventing some hypothetical motivation for the exchange because he understands revealed preference. If they are doing it voluntarily then it is efficient, regardless of what they think they are getting out of it. Indeed, the pure consumption value of buying a plaid sweater for somebody is a perfectly good motivation. And since the recipient voluntarily accepts the gift, even better. If there was a Pareto superior alternative they would have done that instead.
So this holiday, swat that poser economist in red off your left shoulder, hold hands with the real economist in white on your right shoulder and give to your hearts’ content. (Oh and I am very easy to shop for. Just don’t forget to include a gift receipt!)
Tyler Cowen, quoting Ezra Klein on “penalties” for failing to purchase private insurance:
If you don’t have employer-based coverage, Medicare, Medicaid, or anything else, and premiums won’t cost more than 8 percent of your monthly income, and you refuse to purchase insurance, at that point, you will be assessed a penalty of up to 2 percent of your annual income. In return for that, you get guaranteed treatment at hospitals and an insurance system that allows you to purchase full coverage the moment you decide you actually need it. In the current system, if you don’t buy insurance, and then find you need it, you’ll likely never be able to buy insurance again. There’s a very good case to be made, in fact, that paying the 2 percent penalty is the best deal in the bill.
Saddam promoted incompetents in his army deliberately, believing they would be less likely to sponsor a coup. There is a similar process that can operate within firms, the Peter Principle: If firms automatically promote the best performer at level k of the hierarchy to the level k+1, people will be promoted till they find their level of incompetence. Saddam’s promotion policy can be justified on rational choice grounds and similarly we might ask how firms can counteract the logic underlying the Peter Principle.
The New York Times magazine has a section on interesting ideas of the year. One of them concerns the Peter Principle. A group of Italian physicists did a computer simulation with various promotion policies. Random promotion outperformed a “promote the best” policy. It increases the chance that someone who is actually good at the job makes it to the next level. This seems pretty straightforward and eminently amenable to a simple analytical model. But peer review is even better than random promotion: ask the co-workers who might be good at the higher level job. If they have big incentives to lie, at worst you can ignore them and get random promotion as the optimal policy. Or better, share some of the rents from promoting the right person with the reviewers and get some useful information out of them.
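Here is a stripped-down version of that kind of simulation (not the physicists’ code, and with invented weights), under the Peter hypothesis that competence in the next job is unrelated to competence in the current one: promoting the best performer removes the best person from the level below while delivering only an average person above, so random promotion scores higher.

```python
import numpy as np

rng = np.random.default_rng(9)
trials, team = 100_000, 10
c_now = rng.uniform(0, 1, (trials, team))       # competence in the current job
c_next = rng.uniform(0, 1, (trials, team))      # competence one level up (independent: the Peter hypothesis)

def efficiency(promoted):
    rows = np.arange(trials)
    upstairs = c_next[rows, promoted]                          # how the promotee does in the new job
    downstairs = c_now.sum(axis=1) - c_now[rows, promoted]     # the team left behind
    return (2 * upstairs + downstairs).mean()                  # upper level weighted more heavily

print("promote the best:", round(efficiency(c_now.argmax(axis=1)), 2))
print("promote at random:", round(efficiency(rng.integers(0, team, trials)), 2))
```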
These are old ideas from contract theory but we are clearly not doing a good job at getting our insights to the New York Times. On that note, let me congratulate Dan Ariely and his co-authors who have at least three of the best ideas of 2009. The experiment involving the drunks playing the ultimatum game was the most fun – won’t give the point away so you can enjoy it yourself! But it makes me think Jeff and I should do some experiments in our wine club. I wonder if we can get the NSF to support it so I can finally taste a Petrus.



