You are currently browsing the tag archive for the ‘economics’ tag.

China is threatening to cut off imports of American chicken, but poultry experts have at least one reason to suspect it may be an empty threat: Many Chinese consumers would miss the scrumptious chicken feet they get from this country.

“We have these jumbo, juicy paws the Chinese really love,” said Paul W. Aho, a poultry economist and consultant, “so I don’t think they are going to cut us off.”

The story is in the New York Times.

In previous lectures we looked at the design of mechanisms to allocate public and private goods in “small markets.”  In both cases we saw that incentive compatibility is a basic friction preventing efficiency.  But in the case of private goods we saw how that friction vanishes in larger markets.  In this lecture we show that the opposite occurs for public goods.  The inefficiency only gets worse as the size of the population served by a public good grows larger.  We are capturing the foundations of the free-rider problem.  This is another set of notes that I am particularly proud of because here is a completely elementary and graphical proof of a dominant-strategy version of the Mailath-Postlewaite theorem.

The conclusion we draw from this lecture is that the idea of “competition” that restored efficiency in markets for private goods cannot be harnessed for public goods and therefore some non-voluntary institution is necessary to provide these.  This gives an opportunity to have an informal discussion of the kinds of public goods that are provided by governments and the way in which government provision circumvents the constraints in the mechanism design problem (coercive taxation.)  The possibility of providing public goods by such means comes at the expense of losing the ability to aggregate information about the efficient level of the public good.
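The free-rider problem can be seen in miniature with a toy simulation (my own illustrative numbers, not the parameters from the notes) of the cost-share-and-unanimity form these lectures build toward:

```python
import random

random.seed(1)

def provision_prob(n, share=0.4, trials=20_000):
    """Equal-cost-share, unanimity mechanism: a public good costing
    0.4*n is produced only if every one of n agents, each with a value
    drawn uniformly from [0, 1], agrees to pay the share 0.4."""
    hits = sum(
        all(random.random() >= share for _ in range(n))
        for _ in range(trials)
    )
    return hits / trials

for n in (2, 5, 10, 20):
    print(n, round(provision_prob(n), 4))
```

The provision probability is 0.6 raised to the power n, so it vanishes as the population grows, even though for large n production is almost surely efficient (expected total value 0.5n exceeds the cost 0.4n).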

Here are the slides.

There is a carefully researched article appearing in the Huffington Post that says yes.

The Federal Reserve’s Board of Governors employs 220 PhD economists and a host of researchers and support staff, according to a Fed spokeswoman. The 12 regional banks employ scores more. (HuffPost placed calls to them but was unable to get exact numbers.) The Fed also doles out millions of dollars in contracts to economists for consulting assignments, papers, presentations, workshops, and that plum gig known as a “visiting scholarship.” A Fed spokeswoman says that exact figures for the number of economists contracted with weren’t available. But, she says, the Federal Reserve spent $389.2 million in 2008 on “monetary and economic policy,” money spent on analysis, research, data gathering, and studies on market structure; $433 million is budgeted for 2009.

All of the facts in this article are true.  Any academic economist sees first-hand the role the Fed has in supporting research in the area of monetary economics.  And it is easy to see how this article could lead an outsider to its conclusions.

Paul Krugman, in Sunday’s New York Times magazine, did his own autopsy of economics, asking “How Did Economists Get It So Wrong?” Krugman concludes that “[e]conomics, as a field, got in trouble because economists were seduced by the vision of a perfect, frictionless market system.”

So who seduced them?

The Fed did it.

I am not a macroeconomist and apart from an occasional free lunch I have never been the beneficiary of Fed research funding, so I easily could be out of the loop on this conspiracy, but for what it is worth I don’t see any evidence of it.  All of the facts are true, but the conclusion follows from them only if you want it to.

I am sure it would be easy to compile a large list of papers funded by Fed research money that are critical of Fed monetary policy.

Here is some simple market power analysis of Apple’s use of the iPhone as a platform. Think of it as a device you have to buy in order to obtain services from sellers who use the iPhone platform.  The kinds of services you can buy are voice calls (currently from AT&T), music (currently from Apple via iTunes but also via third-parties like Pandora), and applications (from third-party developers via the app store.)

Apple controls the platform, so it can decide whether to provide a service itself (as with iTunes), and if not who provides each service, whether the provider will be exclusive (as with AT&T), and what price to exact from the transaction.  Of course, Apple also sells the handset to you and all of the above factors into how much you are willing to pay for the iPhone.

Here is a basic principle of monopoly power that is central to these decisions. Whether Apple wants to exclusively provide a service or allow a competitive supply from third-parties depends on which of its customers will benefit from the service.  Suppose the service has similar benefits for all iPhone users.  Then Apple can allow the service to be competitively supplied (as with the App Store) and capture the benefit by raising the price of the iPhone.

On the other hand, if the service will be most valuable to those iPhone users who already have a high willingness-to-pay for the phone (the so-called infra-marginal users,) then Apple wants to exclusively provide the service (or contract with an exclusive provider.)  The reason is that the price of the iPhone handset is determined by the marginal user.  Raising the price of the phone itself to capture the benefit to high-end users would price too many marginal users out of the iPhone, and profits would go down.  Instead, Apple captures the value of high-end services (like 3G voice and data) by controlling the supply and pricing the service separately.
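A back-of-the-envelope example (entirely made-up numbers) shows why separate pricing beats folding the service’s value into the handset price when the benefit is concentrated on infra-marginal users:

```python
# Hypothetical users, not Apple's actual economics: ten "marginal"
# users value the handset at 100 and a high-end service at 5; three
# "infra-marginal" users value the handset at 100 and the service at 50.
users = [(100, 5)] * 10 + [(100, 50)] * 3

def profit_bundle(price):
    """Sell handset + service as one bundle at a single price."""
    return price * sum(ph + ps >= price for ph, ps in users)

def profit_separate(handset_price, service_price):
    """Sell the handset at one price and the service separately."""
    buyers = [(ph, ps) for ph, ps in users if ph >= handset_price]
    return (handset_price * len(buyers)
            + service_price * sum(ps >= service_price for _, ps in buyers))

best_bundle = max(profit_bundle(p) for p in range(1, 200))
# fix the handset at the common handset value, search the service price
best_split = max(profit_separate(100, s) for s in range(1, 100))
print(best_bundle, best_split)   # 1365 vs 1450: separate pricing wins
```

Raising the single bundle price to capture the high-end benefit loses the ten marginal users, so with these numbers Apple does better selling the handset to everyone and pricing the service at 50 for the three high-end users.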

The cynical way to interpret this is to say that Apple’s exclusive contract with AT&T is simply a way to extract surplus from high-end users.  The more charitable interpretation is that without this exclusive contract, as a monopolist, Apple would have less incentive to make the phone compatible with high-end services.

When zero marginal cost is too steep:

Champagne producers agreed to pick 32% fewer grapes this year, leaving billions of grapes to rot on the ground, in a move to counter fizzling bubbly sales around the world amid the economic downturn.

Here is the story. (link fixed.)

The full subtitle is “A Sober (But Hopeful) Appraisal” and it’s an article just published in the American Economic Journal:  Microeconomics by Douglas Bernheim.  The link is here (sorry it’s gated, I can’t find a free version.)  Bernheim is the ideal author for such a critical review because he has one toe in but nine toes out of the emerging field of neuroeconomics.  For the uninitiated, neuroeconomics is a rapidly growing but somewhat controversial subfield which aims to use brain science to enrich and inform traditional economic methodology.

The paper is quite comprehensive and worth a read.  Also, check out the accompanying commentary by Gul-Pesendorfer, Rustichini, and Sobel. I may blog some more on it later, but today I want to say something about using neural data for normative economics.  That’s a jargony way to say that some neuroeconomists see the potential for a way to use brain data to measure happiness (or whatever form of well-being economic policy is supposed to be creating.)  If we can measure happiness, we can design better policies to achieve it.

Bernheim comes close to the critique I will spell out but goes in another direction when he discusses the identification problem of mapping neural observations to subjective well-being.  I think there is a problem that cuts even deeper.

Suppose we can make perfect measurements of neural states and we want to say which states indicate that the subject is happy.  How would we do that?  Since neural states don’t come ready-made with labels, we need some independent measurement of well-being to correlate with.  That is, we have to ask the subject.  Let’s assume we make sufficiently many observations coupled with “are you happy now?” questions to identify exactly the happy states.  What will we have accomplished then?

We will simply have catalogued and translated subjective welfare statements.  And using this catalogue adds nothing new.  Indeed if we later measure the subject’s neural state and after consulting the catalogue determine that he is happy, we will have done nothing more than recall that the last time he was in this state he told us he was happy.  We could have saved the effort and just asked him again.

More generally, any way of relating neural data to well-being presupposes a pre-existing means of measuring well-being.  Constructing a catalogue of correlations between these would only be useful if subsequently it were less costly to use neural measurements than the pre-existing method.  It’s hard to imagine what could be more costly than physically reading the state of your brain.

Jeff Miron writes

If the CIA had convincingly foiled terrorists acts based on information from harsh interrogations, the temptation to shout it from the highest rooftops would have been overwhelming.

Thus the logical inference is that harsh interrogations have rarely, if ever, produced information of value.

Without taking a stand on the bottom-line conclusion, I wonder about the intermediate claim.  If, for example, the CIA can document that torture produced critical intelligence, when would be the optimal time to release that information?  There are many reasons to wait until an investigation is already underway.

  1. If it was already in the public record, it would in effect be a sunk cost for prosecutors and have less effect on marginal incentives to go forward.
  2. Public information maximizes its galvanizing effect when the public is focused on it.  Watercooler conversations are easier to start when it is common-knowledge that your cubicle-neighbor is paying attention to the same story you are.
  3. Passing time makes even public information act less public.  Again, it’s not the information per se, but the galvanizing effect of getting the public focused on the same facts.  Over time these facts can be spun, not to mention simply forgotten.

I expect that the success stories are there as a kind of poison pill against the investigators.  They will reach a point where any further progress will require that the positive results come to light.

U.S. producers are allowed to grow a certain amount of cane and beets each year for which they are guaranteed a price set by USDA. Beets get 55 percent of the total quota allotment and cane gets 45 percent. This works like a closed shop. If you want to start growing beets or cane for domestic sugar production, too bad. Catch 22: You only get to have a quota if you already have a quota. As for tariffs: The 2008 Farm Bill says that 85 percent of total sugar in the U.S. must be produced domestically, and only 15 percent can be imported. That 15 percent comes in through quotas distributed among about 20 countries. Any other sugar they want to send us is subject to high tariffs, except from Mexico. Under NAFTA, Mexico can export as much sugar to us as it wants to at the favored price. But imported sugar is never supposed to exceed 15 percent.

This interview covers a variety of angles including the history of sugar protection, high-fructose corn syrup, and the sugar “crisis.”

Suppose I want to send you an email and be sure that it will not be caught in your spam filter. What signal can I use to prove to you that my message is not spam? It must satisfy (at least) two requirements.

  1. It should be cheaper/easier for legitimate senders to use than for spammers.
  2. It should be cheap overall in absolute terms.

The first is necessary if the signal is going to effectively separate the spam from the ham. The second is necessary if the signal is going to be cheap enough for people to actually use it.

It is easy to think of systems that meet the first requirement but very hard to think of one that also satisfies the second. Now researchers at Yahoo! have an intriguing new idea that has received a great deal of attention, CentMail. According to this article, Yahoo! is planning to roll it out soon.

The sender pays a penny to have a trusted server affix an electronic “stamp” to the message. Since spammers could not afford to pay even one cent per message at the massive volume of spam they send, the receiver can safely accept any stamped message without running it through his spam filter.

Now here is the key idea. The penny is paid to charity. How could this matter? Because most people already make sizable donations to charity every year, they can simply route these donations through CentMail making the stamps effectively free. Thus, condition 2 is satisfied.

The first question that comes to mind is the titular one. (Settle down Beavis.) Remember, we still have to worry about condition 1 and whatever magic we use to make it cheap for legitimate email had better not have the same effect on spam. But just like you, any spammer who makes donations to charity will be able to send a volume of spam for free. Apparently the assumption is that spam=evil and evildoers do not also contribute to charity. And we must also assume that CentMail doesn’t encourage entry into the spamming business by those marginal spammers for whom the gift to charity is enough to assuage their previous misgivings.

But these seem like reasonable assumptions. The more tricky issue is whether the 1 penny will actually deter spammers. It is certainly true that at current volume levels, the marginal piece of spam is not worth 1 penny. But for sure there is still a very large quantity of spam that is worth significantly more than 1 penny. For proof, just take a look in your snailbox. Even at bulk rates the cost of junk-mail advertising is several pennies per piece. With CentMail your Inbox would have at least as much stamped spam as the amount of junk mail in your snailbox.

This leads to the crucial questions. Any system of screening by monetary payments should be viewed with the following model in mind. First, ask how many pieces of spam you would expect to receive per day at the specified price. Next, ask how many spam messages you are willing to receive before you turn on your spam filter again. If the first number is larger than the second, then the system is not going to substitute for spam filtering and this undermines the reason to opt in in the first place. For CentMail and me these numbers are 50 and 1.
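To put rough numbers on this (all of them hypothetical), here is the two-step test just described: count the spam that stays profitable at a one-cent stamp, then compare the stamped spam you expect against what you will tolerate:

```python
import random

random.seed(0)
# Hypothetical calibration: draw per-message expected revenue (in
# cents) for 100,000 spam messages from a heavy-tailed distribution,
# so most spam is worth far less than a cent but a fat tail (think
# bulk postal mail) is worth much more.
revenue = [random.paretovariate(1.5) * 0.1 for _ in range(100_000)]

still_sent = sum(r > 1.0 for r in revenue)  # profitable at a 1-cent stamp
print(still_sent)   # a nontrivial chunk of spam survives the stamp

# The decision rule from the post: the stamp replaces filtering only
# if the stamped spam you expect per day is below your tolerance.
expected_per_day, tolerance = 50, 1
print(expected_per_day <= tolerance)   # False: the filter comes back on
```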

Now continued spam filtering won’t necessarily destroy the system’s effectiveness. The stamp can be used in conjunction with standard filtering rules to reduce the chance your ham gets classified as spam. Then the question will be whether this reduction is enough to induce senders to adopt the setup costs of opting in.

Finally there is no reason theoretically that the total volume of spam would be reduced. Providing spammers with a second, higher class of service might only add to their demand.

Blame it on the binding constraint.

Let me explain.  Has it ever struck you how peculiar it is that the price of so much writing these days is zero?  No, I don’t mean that it is surprising that blogs don’t charge a price.  There is so much supply that competition drives the price down to zero.

What I mean is, why are so many blogs priced at exactly zero?  It would be a complete fluke for the optimal price of all of the blogs in the world to be at exactly the same number, zero.

And indeed the optimal price is not zero; in fact the optimal price is negative. Bloggers have such a strong incentive to have their writings read that they would really like to pay their readers.  But for various reasons they can’t and so the best they can do is set the price as low as possible.  That is, as often happens, the explanation for the unlikely bunching of prices at the same point is that we are all banging up against a binding constraint.

(Why can’t we set negative prices?  First of all, we cannot verify that you actually read the article.  Instead we would have people clicking on links, pretending to read, and collecting money.  And even if we could verify that you read the article, most bloggers wouldn’t want to pay just anybody to read.  A blogger is usually interested in a certain type of audience.  A non-negative price helps to screen for readers who are really interested in the blog, usually a signal that the reader is the type that the blogger is after.)

Now, typically when incentives are blunted by a binding constraint, they find expression via other means, distortionary means.  And a binding price of zero is no different.  Since a blogger cannot lower his price to attract more readers, he looks for another instrument, in this case the quality of the writing.

Absent any constraint, the optimum would be normal-quality writing, negative price. (“Normal quality” of course is blogger-specific.)  When the constraint prevents price from going negative, the response is to rely more heavily on the quality variable to attract more readers.  Thus quality is increased above its unconstrained optimal point.
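Here is a toy version of this argument with made-up functional forms: the blogger gets a fixed benefit per reader, readership rises with quality and falls with price, and quality is costly. A grid search finds the unconstrained optimum (negative price) and the constrained one (zero price, higher quality):

```python
# Made-up functional forms: the blogger gets benefit 5 per reader,
# readership is 2 - p + q (price p, quality q), and producing
# quality costs 2*q**2.
def payoff(p, q):
    readers = max(0.0, 2 - p + q)
    return (5 + p) * readers - 2 * q ** 2

def best(prices, qualities):
    """Grid-search the blogger's optimum over the given price range."""
    return max((payoff(p, q), p, q) for p in prices for q in qualities)

grid = [i / 100 for i in range(-300, 301)]   # prices from -3 to 3
qualities = [g for g in grid if g >= 0]

_, p_free, q_free = best(grid, qualities)    # price unconstrained
_, p_zero, q_zero = best([0.0], qualities)   # price stuck at zero

print(p_free, q_free)   # the unconstrained optimum has a negative price
print(p_zero, q_zero)   # at p = 0 the blogger overshoots on quality
```

With these numbers the unconstrained optimum is price −1 and quality 1; forcing the price to zero pushes quality up to 1.25, above its unconstrained level, which is exactly the distortion described above.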

So, the next time you are about to complain that the blogs you read are too interesting (at the margin), remember this, grin and bear it.

Via MR, this article describes the obstacles to a market for private unemployment insurance.  Why is it not possible to buy an insurance policy that would guarantee your paycheck (or some fraction of it) in the event of unemployment?  The article cites a number of standard sources of insurance market failure, but most of these apply also to private health insurance and other markets, and yet those markets function.  So there is a puzzle here.

The main friction is adverse selection.  Individuals have private information about (and control over!) their own likelihood of becoming unemployed.  The policy will be purchased by those who expect that they will become unemployed.  This makes the pool of insured especially risky, forcing the insurer to raise premiums in order to avoid losses. But then the higher premiums cause a selection of even more risky applicants, etc.  This can lead to complete market breakdown.
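The spiral has a simple closed form if we make up a distribution: suppose each worker privately knows her probability r of becoming unemployed, uniform on [0, 1], and the policy pays 1 if she does. A risk-neutral worker buys only if r is at least the premium, so the buyers’ mean risk is (premium + 1)/2, and a break-even insurer reprices at that mean each round:

```python
# Made-up numbers: start by pricing at the population mean risk and
# reprice each round at the mean risk of whoever bought last round.
premium = 0.5
history = [premium]
for _ in range(10):
    premium = (premium + 1) / 2   # mean risk of last round's buyers
    history.append(premium)
print(history)
# the premium climbs toward 1: in the limit only the certain-to-be-
# unemployed remain in the pool and the market unravels completely
```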

In the case of unemployment insurance there is a potential solution to this problem which borrows from the idea of instrumental variables in statistics.  (Fans of Freakonomics will recognize this as one of the main tools in the arsenal of Steve Levitt and many empirical economists.)  The idea behind instrumental variables is that it sidesteps a sample selection problem in statistical analysis by conditioning on a variable which is correlated with the one you care about but avoids some additional correlations that you want to isolate away.

The same idea can be used to circumvent an adverse selection problem.  Instead of writing a contract contingent on your employment outcome, the contract can be contingent on the aggregate unemployment rate.  You pay a premium, and you receive an adjustment payment (or stream of payments) when the aggregate unemployment rate in your locale increases above some threshold.

Since the movements in the aggregate unemployment rate are correlated with your own outcome, this is valuable insurance for you.  But, and this is the key benefit, you have no private information about movements in the aggregate unemployment rate.  So there is no adverse selection problem.
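A sketch of how such an index contract behaves, with invented numbers throughout: the policy pays out whenever the local unemployment rate crosses a trigger, and because individual job loss is correlated with the aggregate rate, the payout arrives disproportionately in the years you need it:

```python
import random

random.seed(2)
# Invented numbers: the aggregate unemployment rate in a locale is
# drawn around 6%, and the index policy pays out whenever the rate
# crosses a 7% trigger -- regardless of the holder's own outcome.
TRIGGER = 0.07

payouts = losses = both = 0
N = 100_000
for _ in range(N):
    agg = random.gauss(0.06, 0.02)      # aggregate unemployment rate
    # the holder's own job loss is more likely when the rate is high
    lost_job = random.random() < max(agg, 0.0)
    payouts += agg > TRIGGER
    losses += lost_job
    both += (agg > TRIGGER) and lost_job

print(losses / N)       # unconditional chance of losing your job
print(both / payouts)   # chance of losing it in a payout year: higher
```

The payout is an imperfect hedge, but it is positively correlated with the holder’s own job loss, and the trigger depends on nothing the holder privately knows.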

The potential difficulty with this is that there will be a lot of correlation in movements in unemployment across locations, and this removes some of the risk-sharing economies typical of insurance.  (With fire insurance, each individual’s outcome is uncorrelated with everyone else’s, so an insurer of many households faces essentially no risk.)

In the last lecture we demonstrated that there was no way to efficiently provide public goods, whether via a market or any other institution.  Now we turn to private goods.

We start with a very simple example: bilateral trade.  A seller holds an object that is valued by a potential buyer.  We want to know how to bring about efficient trade:   the seller sells the object to the buyer if, and only if, the buyer’s willingness to pay exceeds the seller’s.

We first analyze the problem using the Vickrey-Clarke-Groves Mechanism.  We see that the VCG mechanism, while efficient, is not feasible because it would require a payment scheme which results in a deficit:  the buyer pays less than the seller should receive.
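Here is a minimal sketch of the pivotal (VCG) payments for bilateral trade: trade happens when the buyer’s value v exceeds the seller’s cost c, the buyer pays the seller’s cost (his externality) and the seller receives the buyer’s value (hers), so the deficit is visible immediately:

```python
def vcg(v, c):
    """Pivotal (VCG) outcome for one buyer with value v and one
    seller with cost c: trade iff v > c, buyer pays c, seller gets v."""
    if v <= c:
        return {"trade": False, "buyer_pays": 0, "seller_gets": 0, "deficit": 0}
    return {"trade": True, "buyer_pays": c, "seller_gets": v, "deficit": v - c}

print(vcg(v=10, c=4))   # buyer pays 4, seller receives 10: deficit of 6
```

Whenever trade is efficient the buyer pays less than the seller receives, so someone outside the mechanism must put in the difference.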

Then, following the lines of the public goods problem from the previous lecture we show that in fact there is no mechanism for efficient trade. This is the dominant strategy version of the Myerson-Satterthwaite theorem. In fact, we show that the best mechanism among all dominant-strategy incentive compatible and budget balanced mechanisms (i.e. the second-best mechanism) takes a very simple form.  There is a price fixed in advance and the buyer and seller simply announce whether they are willing to trade at that price.

We see the first emergence of something like a market as the solution to the optimal design of a trading institution. We also see that markets are not automatically efficient even when there are no externalities, and goods are private.  There is a basic friction due to information and incentives that constrains the market.

Next we consider the effects of competition.  Our instincts tell us that if there are more buyers and more sellers, the inefficiency will be reduced.  By a series of arguments I show the first sense in which this is true.  There exists a mechanism which effectively makes sellers compete with one another to sell and buyers compete with one another to buy.  And this mechanism improves upon the fixed price mechanism because it enables the traders themselves to determine the most efficient price.  I call this the price discovery mechanism (it is really just a double auction.)

Finally, in one of the best moments of the class, what were previously some random plots of values and costs on the screen coalesce into supply and demand curves and we see how this price discovery mechanism is just another way of seeing a competitive market.  This is the second look at how markets emerge from an analytical framework that did not presuppose the existence of markets at the beginning.
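The coalescing of plots into supply and demand can be sketched in a few lines (invented valuations): sort buyer values downward, seller costs upward, and trade until the curves cross:

```python
# made-up valuations for five buyers and five sellers
values = sorted([9, 3, 7, 5, 8], reverse=True)   # demand, high to low
costs = sorted([2, 6, 4, 10, 1])                 # supply, low to high

# walk down the demand curve and up the supply curve until they cross
k = 0
while k < len(values) and k < len(costs) and values[k] >= costs[k]:
    k += 1

print(k)                    # 3 trades are efficient with these numbers
print(values[k], costs[k])  # any price between 5 and 6 leaves exactly
                            # 3 buyers and 3 sellers willing to trade
```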

Here are the notes.

How do you cut the price of a status good?

Mr. Stuart is among the many consumers in this economy to reap the benefits of secret sales — whispered discounts and discreet price negotiations between customers and sales staff in the aisles of upscale chains. A time-worn strategy typically reserved for a store’s best customers, it has become more democratized as the recession drags on and retailers struggle to turn browsers into buyers.

Answer:  you don’t, at least not publicly.  Status goods have something like an upward sloping demand curve.  The higher the price, the more people are willing to pay for it.  So the best way to increase sales is to maintain a high published price but secretly lower the price.

Of course, word gets out.  (For example, articles are published in the New York Times and blogged about on Cheap Talk.)  People are going to assign a small probability that you bought your Burberry for half the price, making you half as impressive.  An alternative would be to lower the price by just a little, but to everybody.  Then everybody is just a little less impressive.

So implicitly this pricing policy reveals that there is a difference in the elasticity of demand with respect to random price drops as opposed to their certainty equivalents.  Somewhere some behavioral economists just found a new gig.

The City of Oakland, California has become the first city to specifically tax the sale of medical marijuana. And the State of California is considering legislation to legalize the sale of (non-medical) marijuana and impose a tax of $50 per ounce.  My sources estimate the current retail price to be about $300 per ounce.  This is an early stage but that would put the tax at roughly the same rate as cigarettes in California (87 cents on a pack which retails at around $7).

Other taxes are being considered:

Republican state Sen. Jack Murphy’s proposed “pole tax” would have charged patrons of strip clubs a $5 entrance fee. The bill was not approved.

Quoting an interview with a Somali Pirate in Wired. (Tricorne tip:  Snarkmarket.)

1. Bargaining Power of Pirates

Often we know about a ship’s cargo, owners and port of origin before we even board it. That way we can price our demands based on its load. For those with very valuable cargo on board then we contact the media and publicize the capture and put pressure on the companies to negotiate for its release.

2. Bargaining Power of Foreign Negotiators

Armed men are expensive as are the laborers, accountants, cooks and khat suppliers on land. During long negotiations our men get tired and we need to rotate them out three times a week. Add to that the risk from navies attacking us and we can be convinced to lower our demands.

3. Intensity of Competitive Rivalry

The key to our success is that we are willing to die, and the crews are not.

4. The Value of Hostages

Hostages — especially Westerners — are our only assets, so we try our best to avoid killing them.  It only comes to that if they refuse to contact the ship’s owners or agencies.  Or if they attack us and we need to defend ourselves.

5. The Threat of the Navy

Whenever we reach an agreement for the ransom, we send out wrong information to mislead the Navy about our exact location. We don’t want them to know where our land base is so that our guys on the ship can manage a safe escape. We have to make sure that the coast is clear of any navy ships before we leave. That said, there is no guarantee that we won’t be shot or arrested, but this has only happened once when the French Navy captured some of our back up people after the pirates left the Le Ponnant.

At Volokh Conspiracy, Ilya Somin writes:

This week, many of my former students will be undergoing the painful experience of taking the Virginia bar exam. My general view on bar exams is that they should be abolished, or at least that you should not be required to pass one in order to practice law. If passing the exam really is an indication of superior or at least adequate legal skills, then clients will choose to hire lawyers who have passed the exam even if passage isn’t required to be a member of the bar. Even if a mandatory bar exam really is necessary, it certainly should not be administered by state bar associations, which have an obvious interest in reducing the number of people who are allowed to join the profession, so as to minimize competition for their existing members.

What changes would we see if it was no longer necessary to pass the bar in order to practice law?  We can analyze this in two steps.  First, hold everything else about the bar exam fixed and ask how the market will react to making it voluntary.

The first effect would be to encourage more entry into the profession.  Going to law school is not as much of a risk if you know that failing the bar is not fatal.  There would be massive entry into specialized law education.  Rather than go to a full-fledged law school, many would take a few practical courses focused on a few services.  Traditional law schools would respond by becoming even more academic and removed from practice.

Eventually the bar will be taken only by high-level lawyers who work in novel areas and whose services require more creativity and less paper pushing.  But the bar will no longer be the binding entry barrier to these areas.  The economic rationale for the entry barrier is to create rents for practicing lawyers so that they have something to lose.  This keeps them honest and makes their clients trust them.

Now reputation will provide these rents. Law firms, even more so than now, will consist of a few generalist partners who embody all of the reputation of the firm and then an army of worker-attorneys.  All of the rents will go to the partners.  The current path of associate-promoted-to-partner will be restricted to only a very small number of elites.

As a result of all this, competition actually decreases at the high end.

All of these changes will alter the economics of the bar exam itself.  Since the bar is no longer the binding entry barrier, bar associations become essentially for-profit certification intermediaries.  This pushes them either in the direction of becoming more selective, extracting the further increases in rents at the high end, or less selective, becoming effectively a driver’s license that everyone passes (and pays a nominal fee.)  Which direction is optimal depends on elasticities.  Probably they will offer separate high-end and low-end exams.

My bottom line is that banning the bar increases welfare but perhaps for different reasons than Somin has in mind.  Routine services will become more competitive and this is good.  Increased concentration at the high end is probably also good because market power means less output and for the kinds of lawyering they do, reduced output is welfare-improving.

At Marginal Revolution Alex Tabarrok takes an interesting perspective on the minimum wage increase.  Consider an employer who pays more than the minimum wage.  How would that employer be affected?

Indeed, these employers will benefit from an increase in the minimum wage because it will raise the costs of their rivals.

(Based on this conclusion, he looks suspiciously on claims by some employers that they are cheering the minimum wage for moral reasons.)

While it is true that a rise in the minimum wage will raise the costs of their rivals, this is not the end of the story, and looking one step further can reverse the conclusion.  Firms have to compete for workers and if my rival must pay a higher wage, then my own workers now find her to be a more attractive employer at the margin.  To restore the balance, I will typically have to raise my own wage.

For example, this would be true if I have to compete with my rival for workers but workers have a higher disutility of working for me.

Now this assumes that the minimum wage does not create a shortage of jobs for my rival, i.e. excess supply of labor.  There is good empirical evidence that the minimum wage does not have this effect.

However, if the rival has elastic demand for labor, then the conclusion can be reversed yet again.  Increasing the minimum wage causes the rival to employ fewer workers, which increases labor supply for me and allows me to lower my wage.  So in addition to raising my rival’s costs, the minimum wage lowers my own costs.

Note however that in the equilibrium of this last model there is a shortage of minimum wage jobs.  This means that the marginal high-wage worker would prefer to quit and go work for the minimum-wage firm but is unable to because there are no vacancies there.  That doesn’t sound very realistic.

One of the simplest and yet most central insights of information economics is that, independent of the classical technological constraints, transactions costs, trading frictions, etc.,  standing in the way of efficient employment of resources is an informational constraint.  How do you find out what the efficient allocation is and implement it when the answer depends on the preferences of individuals?  Any institution, whether or not it is a market, is implicitly a channel for individuals to communicate their preferences and a rule which determines an allocation based on those preferences. Individuals, understanding this connection, cannot be expected to faithfully communicate their true preferences unless the rule gives them adequate incentive.

As we saw last time there typically does not exist any rule which does this and at the same time produces an efficient allocation.  This result is deeper than “market failure” because it has nothing to do with markets per se. It applies to markets as well as any other idealized institution we could dream up.

So how are we to judge the efficiency of markets when we know that they didn’t have any chance of being efficient in the first place?  That is the topic of this lecture.

Let’s refer to the efficient allocation rule as the first best.  In the language of mechanism design, the first best is typically not feasible because it is not incentive-compatible.  Given this, we can ask what is the closest we can get to the first best using a mechanism that is incentive-compatible (and budget-balanced).  That is a well-posed constrained optimization problem, and we call its solution the second best.
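In symbols (the notation here is mine, not the lecture’s): with a decision rule $q$ and transfers $t_i$, the second-best problem is, schematically,

```latex
% Schematic statement of the second-best problem (notation mine, not from the slides).
% q: decision rule (e.g., whether to produce the public good),
% t_i: transfer collected from agent i,  c: cost of the good,
% \hat v: the profile of reported values.
\max_{(q,\,t)} \ \mathbb{E}\Bigl[\,\sum_i v_i \, q(\hat v)\Bigr]
\quad \text{subject to} \quad
\begin{aligned}
&\text{(IC): reporting } \hat v_i = v_i \text{ is a dominant strategy for every } i,\\
&\text{(BB): } \sum_i t_i(\hat v) \ \ge\ c \cdot q(\hat v) \text{ for every } \hat v.
\end{aligned}
```

The first best is what you would get by dropping the (IC) constraint; the wedge between the two is the measuring stick for real institutions.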

Information economics tells us we should measure existing institutions relative to the second best.  In this lecture I demonstrate how to use the properties of incentive-compatibility and budget balance to characterize the second-best mechanism in the public goods problem we have been looking at.  (Previously the espresso machine problem.)

I am particularly proud of these notes because, as you will see, this is a complete characterization of second-best mechanisms (remember: dominant strategies) for public goods, based entirely on a graphical argument.  And the characterization is especially nice:  any second-best mechanism reduces to a simple rule in which the contributors are assigned ex ante a share of the cost and asked whether they are willing to contribute their share.  Production of the public good requires unanimity.

For example, the very simple mechanism we started with, in which two roommates share the cost of an espresso machine equally, is the unique symmetric second-best mechanism.  We argued at the beginning that this mechanism is inefficient, and now we see that the inefficiency is inevitable: there is no way to improve upon it.

Here are the notes.

The stakes are formidable. Experts estimate that contraband accounts for 12 percent of all cigarette sales, or about 657 billion sticks annually. The cost to governments worldwide is massive: a whopping $40 billion in lost tax revenue annually. Ironically, it is those very taxes — slapped on packs to discourage smoking — that may help fuel the smuggling, along with lax enforcement and heavy supply. (A pack of a leading Western brand that costs little more than $1 in a low-duty country like Ukraine can sell for up to $10 in the U.K.) That potential profit offers a strong incentive to smugglers.

I have argued that legalization of marijuana would not ease the drug war, and might even intensify it.  This series of articles about black market tobacco provides a possible preview of the incentives that would be created by a regulated and taxed market for marijuana.  Legalization may just replace the current war on drugs with a battle to protect tax revenues on legal marijuana and to protect monopoly power by legitimate producers.

In sync with increased regulation and taxes on tobacco in recent years, the black market has thickened.

Yet, despite the exposés, the lawsuits, and the settlements, the massive trade in contraband tobacco continues unabated. Indeed, with profits rivaling those of narcotics, and relatively light penalties, the business is fast reinventing itself. Once dominated by Western multinational companies, cigarette smuggling has expanded with new players, new routes, and new techniques. Today, this underground industry ranges from Chinese counterfeiters that mimic Marlboro holograms to perfection, to Russian-owned factories that mass produce brands made exclusively to be smuggled into Western Europe. In Canada, the involvement of an array of criminal gangs and Indian tribes pushed seizures of contraband tobacco up 16-fold between 2001 and 2006.

Salakot salute:  Terry Gross.

Most of classical economic theory is built on the foundation of revealed preference.  The guiding principle is that, whatever is going on inside her head, an individual’s choices can be summarized as the optimal choices given a single, coherent system of preferences.  And as long as her choices are consistent with a few basic rationality postulates (axioms), this can be shown mathematically to be true.

Most of modern behavioral economics begins by observing that, oops, these axioms are fairly consistently violated.  You might say that economists came to grips with this reality rather late.  Indeed, just down the corridor there is a department which owes its very existence to that fact:  the marketing department.  Marketing research reveals counterexamples to revealed preference such as the attraction effect. Suppose that some people like calling plans with lots of free minutes but high fees (plan A) and others like plans with fewer free minutes but lower fees (plan B).  If you add a plan C which is worse on both dimensions than plan A, suddenly everybody likes plan A over plan B because it looks so much better by comparison to plan C.

The compromise effect is another documented violation.  Here, we add a plan C which has even fewer free minutes and even lower fees than B.  Now everyone starts to prefer B over A, because B is a compromise between the extreme plans A and C.

Do we throw away all of economic theory because this basic foundation is creaking?  No.  There has been a flurry of research recently that is developing a replacement for revealed preference, one which posits not a single underlying preference but a set of preferences, and models individual choices as the outcome of some form of bargaining among these multiple motivations.  Schizonomics.

Kfir Eliaz and Geoffroy de Clippel have a new paper using this approach which provides a multiple-motivation explanation for the attraction and compromise effects.  Add this to papers by Feddersen and Sandroni, Rubinstein and Salant, Ambrus and Rosen, Manzini and Mariotti, and Masatlioglu-Nakajima-Ozbay and one could put together a really nice schizonomics reading list.

Goldman Sachs and JP Morgan have quickly returned the money they got from the government.  The CEO of JPMorgan sees it as a badge of honor:

Amid the surge, Jamie Dimon, JPMorgan’s chief executive, has cemented his status as one of America’s most powerful and outspoken bankers. He has vocally distanced himself from the government’s financial support, calling the $25 billion in taxpayer money the bank received in December a “scarlet letter” and pushing with Goldman Sachs, Morgan Stanley and others to repay the money swiftly. Those three banks repaid the money last month.

Whether or not a bank returned the money quickly, and even if it never got any of it, the bank gained from the intervention.  Why?  Because if AIG, to name the key firm, had gone down, the chain of interlinked insurance contracts that it sold would have been worth nothing.  This would have impacted the whole financial system, including Goldman Sachs etc.  That’s why credit was coming to a halt: no one knew the value of the insurance contracts that were supposedly providing a safety net.

So, taxpayers bailing out AIG helped all these banks, even those who did not participate in the government program.  (It’s a classic free-rider problem in public good provision.) So, where’s my Goldman bonus since I helped to save the financial system?

Senator Kaufman from Delaware asked Judge Sotomayor about the Leegin case, which overturned the per se illegality of resale price maintenance.

Senator Kaufman: But what’s the role of the court in using economic theory to interpret acts of Congress?

SOTOMAYOR: Well, you don’t use economic theory to determine the constitutionality of congressional action. That is a different question, I think, than the one that Leegin addressed.

What Leegin addressed was how the court would apply congressional act, the antitrust laws, to a factual question before it. And that’s a different issue, because that doesn’t do with questioning the economic choices of Congress. That goes to whether or not, in reviewing the action of a particular defendant, what view the court is going to apply to that activity.

SOTOMAYOR: In the Leegin case, the court’s decision was, “Look, we have prior case law that says that this type of activity is always anti-competitive,” and the court, in reconsidering that issue in the Leegin case, said, “Well, there’s been enough presented in the courts below to show that maybe it’s not in — some activities anti- competitive. And so we’re not going to subject it to an absolute bar; we’re going to subject it to a review under rule of reason.”

That’s why I said it’s not a question of questioning Congress’ economic choices or the economic theories that underlay its decisions in a legislation. They weren’t striking down the antitrust laws. What the Court was trying to do was it figure out how it would apply that law to particular set of facts before it.

Remember the joke about the man who asks a woman if she would have sex with him for a million dollars? She reflects for a few moments and then answers that she would. “So,” he says, “would you have sex with me for $50?” Indignantly, she exclaims, “What kind of a woman do you think I am?” He replies: “We’ve already established that. Now we’re just haggling about the price.” The man’s response implies that if a woman will sell herself at any price, she is a prostitute. The way we regard rationing in health care seems to rest on a similar assumption, that it’s immoral to apply monetary considerations to saving lives — but is that stance tenable?

A brilliant article on the basic economics of scarcity, with a focus on the current health care debate.

The anti-trust division of the Justice Department and FTC are reviewing potentially anti-competitive practices by the dominant providers of wireless services.  In my previous post on the subject I discussed the theory of exclusive contracts as illegal barriers to entry.  In this post I will take up the conventional argument that an exclusive agreement can spur investment by providing a guaranteed return.

AT&T absorbed significant upfront costs by developing and expanding its 3G network at a time when only the Apple iPhone was capable of using its higher speeds and advanced capabilities.  AT&T and Apple entered into a relationship in which AT&T would be the exclusive provider of 3G wireless services for the iPhone, and this guaranteed AT&T a stream of revenue which would eventually recoup its investment and turn a profit.  If this exclusive contract were to be scrutinized by anti-trust authorities, AT&T could be expected to argue that without protection from future competition these revenues would not be guaranteed and it would not have been able to make the investment in the first place.

Putting this argument in its proper light requires paying close attention to the distinction between total profits and incentives at the margin.  To justify an exclusive contract on efficiency grounds it is not enough to show that exclusivity raises total profits; it must also be shown that exclusivity adds to the marginal incentive to invest in the new technology.

Imagine that AT&T has no contract with Apple.  The worry is that a competitor will develop a rival 3G network and compete with AT&T for Apple’s business.  If this happens, AT&T is left out in the cold and makes a loss on its investment.  On the other hand, if AT&T has a contract to be the exclusive iPhone 3G provider, then Apple cannot unilaterally break this contract and deal with the new entrant.  Of course if the new provider was a more attractive partner, perhaps because of lower costs or a better technology, Apple could try to buy out of the contract, but AT&T would not accept any payment less than what it would get from insisting on the exclusive contract.

Thus, with an exclusive contract, when a competitor appears AT&T is guaranteed a minimal payoff equal to the total revenue it would earn if it rejected any buyout and insisted on the exclusive deal.  This is the basis of the conventional intuition supporting exclusive dealing.  But what exactly determines this payoff?

The key to understanding this is to note that once the contract is in place and AT&T’s investment is sunk, the two parties are in a situation of bilateral monopoly.  There is some total surplus that will be generated from their mutual agreement, and this surplus will be divided between the two through some bargaining.  The exclusive contract determines the status quo from which they will bargain, and the surplus to be divided is the gain from Apple switching to the new rival.  Investment by AT&T improves the value to Apple of dealing with AT&T; while this raises AT&T’s status quo, it also reduces the gain from switching to the new rival, and hence the bargaining surplus, by exactly the same amount.  In the resulting bilateral monopoly bargaining these effects exactly counteract one another, and the net result is that the contract adds nothing to AT&T’s marginal incentive to invest.
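A stripped-down version of this offset (the notation and the equal-split assumption are mine, not the paper’s): write $v(I)$ for the value of the AT&T–Apple relationship given investment $I$, write $w$ for the value Apple could get from the entrant, and suppose bargaining splits the gains 50/50.

```latex
% With the exclusive contract, the status quo is the contract itself,
% so AT&T's bargained payoff is its status quo plus half the switching gain:
\pi_{\text{excl}}(I) \;=\; v(I) + \tfrac12\bigl[w - v(I)\bigr] \;=\; \tfrac12\,v(I) + \tfrac12\,w .
% Without the contract, Apple's outside option is the entrant and
% AT&T's status quo is zero, so AT&T gets half the gain from dealing with it:
\pi_{\text{no}}(I) \;=\; \tfrac12\bigl[v(I) - w\bigr] \;=\; \tfrac12\,v(I) - \tfrac12\,w .
% Either way the marginal return to investment is identical:
\frac{d\pi_{\text{excl}}}{dI} \;=\; \frac{d\pi_{\text{no}}}{dI} \;=\; \tfrac12\,v'(I) .
```

The contract shifts AT&T’s payoff by the lump sum $w$, which does not depend on $I$, so it changes who gets the surplus but not the incentive to invest at the margin.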

This is the insight of Segal and Whinston in their RAND paper “Exclusive Contracts and Protection of Investments.”

Ultimately, an exclusive contract only shifts surplus to the investing party in a lump sum, independent of the level of investment.  There are two implications of this.

  1. It cannot be argued that exclusive contracts are necessary for protection of investments.  The shifting of surplus could be just as easily achieved by replacing the exclusive contract with a lump-sum cash payment to AT&T.
  2. However, the argument described here cannot be the decisive plank in any anti-trust litigation.  If an anti-trust investigation were to go forward, AT&T/Apple could argue that instead of using the lump-sum payment (which may be impractical when the required payment is large) they chose to use an exclusive contract to do the surplus shifting.  That is, merely noticing that exclusive contracts are not necessary does not imply that they are not useful.  At best, there would have to be a finding that the exclusive contract had some other anti-competitive intent, and the arguments here would just be used to disarm any defense on the basis of necessity.

CAPTCHAs are everywhere on the web now.  They are the distorted text that you are asked to identify before being allowed to register for an account.  The purpose is to prevent computer programs from gaining quick access to many accounts for nefarious purposes (spam for example.)

reCAPTCHA piggy-backs on CAPTCHA.  You are asked to identify two words. The first is a standard CAPTCHA.  If you enter the correct word you identify yourself as a human.  The second is a word that has been optically scanned from a book that is being digitized.  It has found its way into this reCAPTCHA because the computer doing the optical character recognition was not able to identify it.  If you have identified yourself as a human via the first CAPTCHA, your answer to the second word is assumed to be correct and used in the digital translation.  You are digitizing the book.
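The verification logic described above can be sketched in a few lines.  This is an illustrative sketch only; the function names, the vote tally, and the agreement threshold are my own simplifications, not reCAPTCHA’s actual implementation.

```python
# Illustrative sketch of the reCAPTCHA idea (names and details are mine,
# not the real implementation).  Two words are shown: a "control" word
# whose answer the server knows, and an unknown word scanned from a book.

def check_response(control_answer, unknown_answer, control_word, votes):
    """Verify humanity with the control word; if it passes, record the
    respondent's reading of the unknown word as a transcription vote."""
    if control_answer.lower() != control_word.lower():
        return False  # failed the CAPTCHA; discard both answers
    # A verified human's reading of the unknown word counts as a vote.
    key = unknown_answer.lower()
    votes[key] = votes.get(key, 0) + 1
    return True

def transcription(votes, threshold=2):
    """Accept the unknown word once enough humans agree on a reading."""
    if votes:
        word, count = max(votes.items(), key=lambda kv: kv[1])
        if count >= threshold:
            return word
    return None  # not yet enough agreement
```

The point of the two-word design is visible in the code: the control word supplies the incentive check, and only answers that pass it contribute to the digitization.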

According to Wikipedia, 20 years of the New York Times archive have been digitized with the help of reCAPTCHA, and the system “provides about the equivalent of 160 books per day, or 12,000 manhours per day of free labor.”

The first reaction to this is obvious.  The labor is not free.  In fact it costs exactly 12,000 man hours per day.  Lots of things can be produced with 12,000 man hours; lots of leisure can be consumed in 12,000 hours.  Is digitizing the New York Times the best use of this people-time?  On top of that, the reCAPTCHA is a tax which reduces the quantity of online accounts transacted, and that is a deadweight loss.

But it is just a few seconds of your time right?  Something about that seems to change the calculation.  I bet most people would say that they don’t mind giving away two seconds of their time.  Part of this is due to an illusion of marginal vs total.  People are tempted to treat the act as a gift of two seconds of their time in return for a whole digitized library.  But in fact they are giving away two seconds of their time for one digitized word.

A second part of this is due to a scale illusion.  You may successfully convince said reCAPTCHAer that she is digitizing just a tiny fraction of the book with her two seconds, but she will probably still say that she is happy with that.  But if you ask her whether she is willing to contribute 1000 seconds for 500 words, probably not.  And, to take increasing marginal costs out of the question, if you asked her whether she thought digitizing the New York Times is worth thousands of woman-hours of (dispersed) uncompensated labor, she again might start to see the point.

But still, not everybody.  And I think there must be some sound rationale underneath this.  I would not argue that digitizing books is necessarily the highest-priority public good, but the mechanism is inherently linked to deciphering words.  True, we could require everyone who signs up at Facebook to donate 1 penny to fight global warming, but A) it is never possible to know exactly what “1 penny toward fighting global warming” means, whereas there is no way to redirect my contribution if I decipher a word.  That is not a liquid asset.  And B) two seconds of most people’s time is worth less than 1 penny (we are talking about Facebook users, remember) and we don’t have a micro-payments system in place to go down to fractions of pennies.

Perhaps what we have here is a unique opportunity to utilize a public-goods contribution mechanism that is transparent and non-manipulable and guarantees to each contributor that he will not be free-ridden on:  everyone else is committed to the same contribution.

After showing how the Vickrey auction efficiently allocates a private good, we revisit some of the other social choice problems discussed at the beginning and speculate how to extend the Vickrey logic to those problems.  We look at the auction with externalities and see how the rules of the Vickrey auction can be modified to achieve efficiency.  At first the modification seems strange, but then a theme emerges:  agents should pay for the negative externalities they impose on the rest of society (and receive payment in compensation for the positive externalities).

We distill this idea into a general formula which measures these externalities and define a transfer function according to that formula.  The resulting efficient mechanism is called the Vickrey-Clarke-Groves mechanism.  We show that the VCG mechanism is dominant-strategy incentive compatible and we show how it works in a few examples.
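For the record, the formula takes the familiar Clarke pivot form (standard notation, mine rather than the slides’): letting $q^*(v)$ be the outcome that maximizes total reported value,

```latex
% Agent i's VCG (Clarke pivot) payment: the externality her presence
% imposes on everyone else.
t_i(v) \;=\;
\underbrace{\max_{q}\,\sum_{j\neq i} v_j(q)}_{\text{others' best outcome without } i}
\;-\;
\underbrace{\sum_{j\neq i} v_j\bigl(q^*(v)\bigr)}_{\text{others' value at the chosen outcome}}
```

An agent whose report does not change the outcome pays nothing; only the pivotal agents pay, and exactly the harm they cause.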

We conclude by returning to the roommate/espresso machine example.  Here we explicitly calculate the contributions each roommate should make when the espresso machine is purchased.  We remind ourselves of the constraint that the total contributions should cover the cost of the machine, and we see that the VCG mechanism falls short.  Next we show that the VCG mechanism is in fact the only dominant-strategy efficient mechanism for this problem, and arrive at this lecture’s punch line.
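The budget shortfall is easy to see numerically.  Here is a minimal sketch of the VCG pivot payments for the espresso machine problem; the cost and values are illustrative numbers of my choosing, not from the lecture.

```python
# VCG pivot payments for a shared public good (the espresso machine).
# Illustrative numbers are mine: cost c, private values for each roommate.

def vcg_public_good(values, c):
    """Return (build?, list of VCG pivot payments).
    The machine is bought when total value covers the cost; each agent
    then pays the amount by which her report tips the decision:
    max(0, c - sum of the others' values)."""
    build = sum(values) >= c
    if not build:
        return False, [0.0 for _ in values]
    payments = []
    for i in range(len(values)):
        others = sum(values) - values[i]
        payments.append(max(0.0, c - others))  # i's pivotal payment
    return True, payments

build, pay = vcg_public_good([60.0, 60.0], c=100.0)
# build is True; each roommate pays 100 - 60 = 40, so the mechanism
# collects 80, which falls 20 short of the cost of 100: a budget deficit.
```

Truth-telling is dominant here, and the machine is bought exactly when it is efficient to buy it, but the payments cannot be scaled up to cover the cost without breaking those incentives.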

There is no efficient, budget-balanced, dominant-strategy mechanism.

Here are the slides.

The Unbundled Economy: It’s one of the implications of (my guess at, given that I haven’t actually read) Free, and it’s apparently what Tyler Cowen is talking about in his new book.  As the price of transmitting small chunks of information crashes to zero, the efficient market structure no longer involves the assembly and sale of bundles of chunks, but instead the sale of the chunks themselves and after-market assembly.

Case in point, the porn industry (which is pretty much always at the leading edge of structural change.)

Vivid, one of the most prominent pornography studios, makes 60 films a year. Three years ago, almost all of them were feature-length films with story lines. Today, more than half are a series of sex scenes, loosely connected by some thread — “vignettes” in the industry vernacular — that can be presented separately online. Other major studios are making similar shifts.

In lieu of plot, there are themes. Among the new releases from New Sensations, a studio that makes 24 movies a month, is “Girls ’n Glasses,” made up of scenes of women having sex while wearing glasses.

But old habits die hard, even in the porn world:

“The feature is not as big a part of the industry today,” Mr. Orenstein said. But he says he still plans two to three bigger-budget releases each year, including the recently shot “2040,” which is about the pornography business of the future. Mr. Orenstein described the movie as “an almost Romeo-and-Juliet story between an aging porn star and a cyborg.”

As part of a broader revival of Section 2 of the Sherman Act, the anti-trust division of the Department of Justice, under Obama appointee Christine Varney, has opened a review of potentially anti-competitive practices by the dominant telecom providers.  One specific issue that has received attention is exclusionary contracting between wireless carriers (AT&T) and handset manufacturers (Apple iPhone).  The FTC is reportedly also exploring these contracts.  Exclusive contracts bind a manufacturer’s handsets to specific carriers, thereby hindering or preventing end users from migrating to other carriers.  The widespread nature of these contracts may create a barrier against entry by new, smaller wireless providers who cannot offer their users handsets that compete with the top models.

The review is reported to be at an early stage and may not lead to a formal investigation, but as this develops there are a few basic economic arguments to keep in mind.  To start with, there is the benchmark “Chicago School” view, which begins with the observation that exclusionary contracts require the voluntary agreement of the handset manufacturers.  The manufacturers internalize the costs of the entry barrier because without entry they will have fewer competing carriers to sell their phones to.  Therefore, exclusive contracts must compensate manufacturers for this loss, implying that these contracts will be in place only when the total surplus from exclusion exceeds the cost, i.e. when it is efficient.  The Chicago argument is a longstanding pillar of regulatory policy that still holds sway today.  From the article:

Jon Muleta, former wireless bureau chief of the FCC, said exclusive handset deals won’t be an issue the government can pursue on antitrust grounds unless major handset makers say they’re being forced into the deals. “The equipment providers enter into these deals willingly,” Mr. Muleta said.

The Chicago argument ignores the costs to end users from reduced competition in wireless service.  It would apply only if manufacturers internalized all of the benefits to consumers from increased competition.  But under any reasonable model of the wireless market structure, end-user consumer surplus would increase with more competition for wireless service, and this benefit is an externality relative to the parties in the Chicago bargain.

Secondly, the Chicago argument has been discredited because it takes a naive view of the way contract negotiation would work.  Implicitly, the Chicago argument assumes that handset manufacturers must be compensated with at least what they would earn if entry were to occur.  But scale economies imply that a new carrier will enter only if sufficiently many, or sufficiently large, manufacturers remain free of exclusive deals.  The dominant carriers can use a “divide and conquer” strategy which exploits the difficulty handset manufacturers face in coordinating to sever their exclusive deals.  Without this coordinated threat, manufacturers cannot extract the compensation envisioned in the Chicago argument, and again efficiency breaks down.

The definitive references here are Rasmusen, Ramseyer, and Wiley, “Naked Exclusion,” and a follow-on comment by Segal and Whinston, both in the American Economic Review.

There is a separate defense of exclusive contracts, often cited and also reflected in the article.

Paul Roth, AT&T’s president of retail sales and service, told Congress last month that the billions of dollars the company invests in its network and services would be put at risk if government were to “impose intrusive restrictions on these services or the way that service providers and manufacturers collaborate on next-generation devices.” Mr. Roth said there is plenty of competition and innovation in the wireless industry.

AT&T’s tremendous investment in its 3G network will pay off only because of its exclusive deal with Apple to market the iPhone.  Thus, it is often argued that exclusive contracts are in fact pro-competitive as they reward investment with profits that would otherwise be subject to hold-up or competed away.  I will take up this argument in a subsequent post.

Here is an excellent example of social choice paradoxes in practice:  the voting system for the Olympic venue.  The article illustrates cycles, failure of unanimity, and violations of independence of irrelevant alternatives.  A great teaching aid; I will certainly be using it next time I teach my intermediate micro course.  I thank Taresh Batra for the pointer.

By the way, there is another perfect social choice example from the Olympics.  In the 2002 women’s figure skating competition, Michelle Kwan was leading Sarah Hughes when the final skater, Irina Slutskaya, took the ice.  Slutskaya put in a sub-par performance which was nevertheless good enough to surpass Kwan.  But the real surprise was that this performance by Slutskaya reversed the ranking of Kwan and Hughes, so that Hughes leaped ahead of both Kwan and Slutskaya and won the gold.  In the end, Hughes took the gold, Slutskaya took silver, and Kwan went home with the bronze medal.  Here is an old story.

There has been a run on one of the largest banks in an economics-themed online role-playing game called Eve.  The event merited an article at the BBC.  The run was triggered when Ricdic, an executive of the bank, made off with a large sum of virtual lucre and exchanged it for real-world cash.

Eve Online has about 300,000 players all of whom inhabit the same online universe. The game revolves around trade, mining asteroids and the efforts of different player-controlled corporations to take control of swathes of virtual space.

It has now emerged that Ricdic used the cash to put down a deposit on a house and to pay medical bills.

“I’m not proud of it at all, that’s why I didn’t brag about it,” Ricdic told Reuters. “But you know, if I had to do it again, I probably would’ve chosen the same path based on the same situation.”

Apparently, the bank had tremendous reserves and has so far withstood the run.  Here is more information.  Either real-world bank regulators have something to learn from Eve or the other way around, because here is Ricdic’s comeuppance:

Ricdic has now been thrown out of the game as trading in-game cash for real money is against Eve Online’s terms and conditions.

The rules governing play within Eve would not have sanctioned Ricdic if he had simply stolen the cash and used it in the game, nor if he had bought kredits with real dollars.

Fedora Flourish:  BoingBoing