
In a much-discussed post at one of my favorite blogs, Language Log, Mark Liberman christens a new game:

We might call this the Pundit’s Dilemma — a game, like the Prisoner’s Dilemma, in which the player’s best move always seems to be to take the low road, and in which the aggregate welfare of the community always seems fated to fall. And this isn’t just a game for pundits. Scientists face similar choices every day, in deciding whether to over-sell their results, or for that matter to manufacture results for optimal appeal.

(Aside on the game name game:  when I was a first-year PhD student at Berkeley, Matthew Rabin taught us game theory. As if to remove all illusion that what we were studying was connected to reality, every game we analyzed in class was given a name according to his system of “stochastic lexicography.”  Stochastic lexicography means randomly picking two words out of the dictionary and using them as the name of the game under study.  So, for example, instead of studying “job market signaling” we studied something like “rusty succotash.”  I wonder if any of our readers remember some of the game names from that class.)

(Stay tuned for my next Matthew Rabin story, which will involve a hacky sack and a bodily fluid.)

There is indeed a strong incentive for pundits to distort what they say, and it has the flavor of contrarianism.  It's based on an old paper by Prendergast and Stole (requires JSTOR, sorry.  Support Open Access publishing.)  Suppose that what pundits want is to convince the world that they are smart.  (Perhaps they want to influence policy.  They will be influential later only if they can prove they are smart today.  So today the details of what they are saying matter less than whether what they are saying is perceived to be smart.)

The thing about being really smart is that it means you are talking to people who aren't as smart as you. (Sandeep faces this problem all the time.)  So they can't verify whether what you are saying is really true (especially when we are talking about climate change policies, where if we ever do find out who was right, it will be well past the time that punditry is a profitable enterprise).  But one thing the audience knows is that smart pundits can figure out things that lesser pundits cannot.  That means that the only way a smart pundit can demonstrate to his not-so-smart audience that he is smart is by saying things different from what his lesser colleagues are saying, i.e. to be a contrarian.

Net neutrality refers to a range of principles ensuring non-discriminatory access to the internet.  A particularly contentious principle urges prohibition of “managed” or “tiered” internet service wherein your internet service provider is permitted to restrict or degrade service.  ISPs argue that without such permission they are unable to earn sufficient return on investment in network capacity and would be deterred from making such improvements.

One argument is based on congestion.  Managed service controls congestion, raising the value to users and allowing providers to capture some of this value with access fees.  This is a logical argument and one I will take up in a later post, but here I want to discuss another aspect of managed service:  price discrimination.

Enabling providers to limit access, say by bandwidth caps, opens the door to “tiered” service where users can buy additional bandwidth at higher prices.  This generally raises profits and so we should expect tiered service if net neutrality is abandoned.  What effect does the ability to price discriminate have on an ISP’s incentive to invest in capacity?

It can easily reduce that incentive and this undermines the industry argument against net neutrality.  Here is a simple example to illustrate why.  Suppose there is a small subset of users who have a high willingness to pay for additional bandwidth.  Under net neutrality, all users are charged the same price for access, and none have bandwidth restrictions.  An ISP then has only two choices.  Set a high price and sell only to the high-end users, or set a low price and sell to all users.  When the high-end users are relatively few, profits are maximized with low prices and wide access.  It is reasonable to think of this as describing the present situation.

Suppose tiered access is now allowed.  This gives the ISP a new range of pricing schemes.  The ISP can offer a low-price service plan with a bandwidth cap alongside a high-priced unrestricted plan.  As we vary the cap associated with the low-end plan, we can move along a continuum from no cap at all to a 100% cap.  These two extremes are equivalent to the two price systems available under net neutrality.

Often one of these in-between solutions will be more profitable than either of the two extremes.  The reason is simple.  The bandwidth cap makes the low-end plan less attractive to high-end users and as a result the ISP can raise the price of un-capped access to high-end users.  It’s true that low-end users will pay less for capped service but often the trade-off is favorable to the ISP and total profits increase.

The upshot of this is that total bandwidth is lower, not higher, when an ISP unconstrained by net-neutrality uses the profit-maximizing tiered-service plan.  Couched in the industry’s usual terms, the ISP’s incentive to increase network capacity is in fact reduced by moving away from net neutrality.

(Of course it can just as easily go the other way.  For example, it may be that presently only the high-end users are being served because to lower the price enough to attract the low-end users, the ISP would lose too much profit from the high end.  In that case, allowing tiered service would induce the ISP to raise capacity and offer a capped service to previously excluded low-end users without significantly reducing profits from the high end.  Note, however, that this is not typically how industry lobbyists frame their argument.)
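
The main case above can be put into numbers (all invented for illustration): a few high-end users, many low-end users, and a cap that makes the cheap plan unattractive to the high end.

```python
# Invented numbers: 10 high-end users value uncapped service at $100 and
# capped service at $40; 90 low-end users value uncapped at $30 and capped
# at $25.  Uncapped service uses 1 unit of bandwidth, capped uses 0.5.
n_high, n_low = 10, 90

# Net neutrality: one uncapped price for everyone, so only two choices.
profit_high_only = n_high * 100            # price $100, sell to 10 users
profit_everyone = (n_high + n_low) * 30    # price $30, sell to all 100
neutral_profit = max(profit_high_only, profit_everyone)   # low price wins
neutral_bandwidth = (n_high + n_low) * 1.0                # everyone uncapped

# Tiered service: capped plan at $25 for the low end.  The uncapped price can
# rise to $85 before high-end users defect (100 - 85 = 40 - 25 = 15).
tiered_profit = n_low * 25 + n_high * 85
tiered_bandwidth = n_low * 0.5 + n_high * 1.0

print(neutral_profit, neutral_bandwidth)  # 3000 100.0
print(tiered_profit, tiered_bandwidth)    # 3100 55.0
```

Tiering raises profit but cuts total bandwidth roughly in half, which is the sense in which abandoning neutrality can reduce the investment incentive.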

A city in Taiwan is trying to keep the streets clean by offering cash for collected dog poo.

City officials in Taichung, which has a population of one million, said on Wednesday the environmental protection bureau would give vouchers worth 100 Taiwan dollars ($3) for every kilo of dog poo collected. In areas of the city especially affected, the reward will be for every half-kilo.

In related news, Taichung is witnessing a sudden surge in demand for high-fiber dog food which is now being sold in convenient single-serving sizes priced at 99 Taiwan dollars.

My mother tells me that where she lives there are cameras that will catch you if you don't come to a complete stop at the octagonal sign.  Your license plates will be photographed and you will be sent a bill in the mail.  The fine is close to $500.  That's a lot more than I remember it being.

Quiz:  suppose the technology improves for detecting whether a violation has taken place.  Should the fine increase, decrease or stay constant?
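
One candidate framing (not necessarily the intended answer) is Becker's expected-penalty calculation: what deters is the detection probability times the fine, so better detection permits a smaller fine for the same deterrence.  A sketch with invented numbers:

```python
def fine_for_constant_deterrence(expected_penalty, detection_prob):
    """Fine F such that detection_prob * F equals the target expected penalty."""
    return expected_penalty / detection_prob

# Suppose an expected penalty of $50 is enough to deter running the stop sign.
target = 50.0

# As cameras push the detection probability up, the fine needed falls.
for p in (0.1, 0.5, 1.0):
    print(p, round(fine_for_constant_deterrence(target, p), 1))
```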

I have read and heard anecdotal evidence that litigation in the United States is countercyclical.  Usually this is cynically explained by saying that when times are tough everybody is looking to make an extra buck.  But of course everybody is looking to make an extra buck when times are good too.

All of business activity relies on relationships that are partially supported by contracts and partially supported by trust.  Trust fills in the gaps of incomplete contracts.  When the contract is not followed to the letter, your interest in maintaining a healthy relationship smooths things over.

Bad times raise uncertainty about whether there are any gains left from this relationship in the future.  This undermines trust and the result is that the courts are called in to fill the gaps.

There are a couple of natural ways to test this theory.  First, the countercyclical nature of litigation should vary across sectors.  Thick markets with relatively anonymous actors should see less impact of economic downturns on the rate of litigation.  Also, the effect outlined above is based on the assumption that contracts are written in good times and litigated in bad times.  If the downturn is expected to last, then new contracts should tend to be more complete, taking into account the increased appetite for litigation.  The result should be less litigation in longer downturns than in shorter ones.

I thank Rosemary for the conversation.

Should punishment depreciate as time passes?  As usual the answer probably depends on whether you think of punishment as justice or as a mechanism to internalize externalities.

I can see how the demands of justice could be reduced and even expire after many years pass.  One view is that identity evolves and eventually the accused is a different person from the criminal of the past and justice is not served by punishing someone who is effectively a third party.

On the other hand, if the purpose is to deter crime then the passage of time should arguably increase the punishment.  What matters is the perceived cost of the act evaluated at the time of acting.  A fixed penalty (possibly) deferred far in the future imposes a smaller cost.  To compensate for the discounting, the size of the penalty must be larger when it begins later.  It's tempting to say that because the time for acting has already passed there is no retroactive incentive effect from extending the punishment.  But this logic would undermine all penalties after the fact.  Indeed, the incentive theory of punishment relies on prosecutors holding to their commitments, presumably because of reputational concerns.
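
The discounting point in back-of-the-envelope form (discount factor and numbers invented): if offenders discount the future at rate delta per year and punishment starts t years after the act, a penalty F is perceived as delta**t * F at the moment of acting, so holding deterrence fixed, the penalty must grow with the delay.

```python
def penalty_for_constant_deterrence(perceived_cost, delta, years_delayed):
    """Penalty F with delta**t * F equal to the target cost perceived today."""
    return perceived_cost / delta ** years_delayed

delta = 0.9            # annual discount factor (invented)
perceived_cost = 10.0  # deterrence required at the moment of the crime

# The longer punishment is deferred, the larger it must be.
for t in (0, 5, 20):
    print(t, round(penalty_for_constant_deterrence(perceived_cost, delta, t), 1))
```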

Working against this is the incentive effect on prosecutors.  One reading of a statute of limitations is that it compels prosecutors to make reasonably prompt decisions to bring charges.  We can model this by supposing there is a flow cost of maintaining a defense:  keeping track of the whereabouts of witnesses, preserving documents, coordinating the memories of all involved.  The freedom to delay induces prosecutors to optimally impose costs on the innocent in order to maximize chances of conviction.

Presumably the latter is less of a concern when the criminal has already confessed.

Put aside the question of why customers give tips.  That’s certainly a huge mystery but the fact is that many diners give tips and the level of tip depends on the quality of service.

In this article (via foodwire), a restaurateur explains why he decided against a switch to the European system of a fixed service charge.

We looked very hard at this [servis compris] policy fifteen years ago. We were going to call it "hospitality included." We felt people who worked in the dining room were apologizing for being hospitality professionals. I felt there was a resulting shame or lack of pride in their work. My assumption was that it was fueled by the tipping system, and I was troubled by the sense that the tipping system takes a big part of the compensation decision out of the employer's hands. So we brought up the "hospitality included" idea to our people. To our surprise, it turned out the staff actually enjoyed working for tips.

The tipping system encourages servers to put more weight on the diner's welfare than the restaurateur would like, at least at the margin.  You can think of the waiter as selling you extra bread, more wine in your glass, and more attention at the expense of less generous (-looking) diners.  The restaurateur incurs the cost but the server earns the tip.

On the other hand, a fixed service charge provides too little incentive to take care of the customers.  You can think of a tipping system as outsourcing to the diner the job of monitoring the server.

(I once had a conversation on this topic with Toomas Hinnosaar and I am probably unconsciously plagiarising him.)

We talked a lot before about designing a scoring system for sports like tennis.  There is some non-fanciful economics based on such questions.  Suppose you have two candidates for promotion and you want to promote the candidate who is most talented.  You can observe their output but output is a noisy signal that depends not just on talent, but also effort both of which you cannot observe directly.  (Think of them as associates in a law firm.  You see how much they bill but you cannot disentangle hard work from talent.  You must promote one to partner where hard work matters less and talent matters more.)

How do you decide whom to promote?  The question is the same as how to design a scoring system in tennis to maximize the probability that the winner is the one who is most talented.

One aspect of the optimal contest seems clear.  You should let them set the rules.  If a candidate knows he has high ability he should be given the option to offer a handicap to his rival.  Only a truly talented candidate would be willing to offer a handicap.  So if you see that candidate A is willing to offer a higher handicap than candidate B, then you should reward A.

The rub is that you have to reward A, but give B a handicap.  Is it possible to do both?

If you are the owner of a large enterprise and are ready to retire, what do you do?  Sell to the highest bidder.  Before selling, do you want to split your firm into competing divisions and sell them off separately?  No, because that would introduce competition, reduce market power and lower the bids, so the sum total is lower than what you would get for the monopoly.  Searle, the drug company, sold itself off to Monsanto as one unit.

Miguel Angel Felix Gallardo, the Godfather of the Mexican illegal drug industry, lived a peaceful life as a rich monopolist.  Then he was caught in 1989 and decided to sell off his business.  In principle, Gallardo should sell off a monopoly just like Searle.  But he did not (see the end of the article).  The difference is that property rights are well defined in a legal business, so Searle belongs to Monsanto.  But Gallardo can't commit not to sell the same thing twice, as property rights are not well-defined.  There is also considerable secrecy, so it's hard to know if the territory you are buying was already sold to someone else before.  And after you've sold one bit for a surplus, you have the incentive to sell off another chunk, since you ignore the negative impact of this on the first buyer.

The result is that selling illegal drug turf results in a more competitive market than the ex ante ideal.  As the business is illegal anyhow, all the gangs can shoot it out to capture someone else’s territory.  Exactly what’s happening now.
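
The commitment problem can be quantified with a textbook linear-demand Cournot model (my model choice; nothing this specific is in the articles): the more pieces the turf ends up in, the less the pieces are worth in total.

```python
def total_industry_profit(a, n_firms):
    """Total profit with n_firms Cournot competitors, inverse demand P = a - Q
    and zero marginal cost: each firm sells a/(n+1) and price is a/(n+1)."""
    q_each = a / (n_firms + 1)
    price = a - n_firms * q_each
    return n_firms * q_each * price

a = 100.0
monopoly = total_industry_profit(a, 1)   # 2500.0
duopoly = total_industry_profit(a, 2)    # about 2222
five_way = total_industry_profit(a, 5)   # about 1389

# A seller who can commit sells the whole thing once, at the monopoly value;
# one who cannot ends up splitting it and collecting less in total.
print(monopoly, round(duopoly), round(five_way))
```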

Tyler Cowen blegs for ideas on the economics of randomized trials. There is a simple and robust insight economic theory has to offer the design of randomized trials: controlling incentives in order to reduce ambiguity in the measurement of effectiveness.

Suppose you are testing a new drug that must be taken on a daily basis. A typical problem is that some patients stop taking the drug but for various reasons do not inform the experimenters. The problem is not the attrition per se because if the attrition rate were known, this could be used to identify the take-up rate and thereby the effectiveness of the drug.

The problem is that without knowing the attrition rate in advance there is no way to independently identify it: the uncertainty about the attrition rate becomes entangled with the uncertainty about the drug’s effectiveness. The experimenters could assume some baseline attrition rate, but when the effectiveness results come out on the high side, there is always the possibility that this is just because the attrition rate for this particular experiment was lower than usual.

The simple way to solve this problem is to use selective trials rather than randomized trials: require patients in the study to pay a price to remain in the study and continue to receive the drug. If the price is high enough, only those patients who actually intend to take the drug will pay the price. Thus the attrition rate can be directly observed by noting which patients continued to pay for the drug. This removes the entanglement and allows statistical identification of the effectiveness of the drug.
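
A toy simulation of the identification logic (all numbers invented; the baseline improvement rate is treated as known for simplicity, where a real trial would estimate it from the control arm):

```python
import random

random.seed(0)

def run_trial(n, attrition, price=0.0):
    """Simulate n patients.  Takers improve with prob 0.6, quitters with 0.3.
    With a positive price, only patients who keep taking the drug pay."""
    improved = paying = 0
    for _ in range(n):
        takes_drug = random.random() > attrition
        pays = takes_drug if price > 0 else True
        paying += pays
        improved += random.random() < (0.6 if takes_drug else 0.3)
    return improved / n, paying / n

# Randomized trial: the observed rate mixes the drug effect with unknown
# attrition, so a strong drug with many quitters looks like a weak drug.
mixed_rate, _ = run_trial(10_000, attrition=0.3)

# Selective trial: the payment rate reveals the take-up rate directly,
# which disentangles the two.
obs_rate, takeup = run_trial(10_000, attrition=0.3, price=1.0)
effect_on_takers = (obs_rate - (1 - takeup) * 0.3) / takeup

print(round(mixed_rate, 2), round(takeup, 2), round(effect_on_takers, 2))
```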

This is one of a number of new ideas in a new paper by Sylvain Chassang, Gerard Padro i Miquel and Erik Snowberg.

Followup: Sylvain Chassang points me to two experimental papers that explore/implement similar ideas:

http://www.dartmouth.edu/~jzinman/Papers/OU_dec08.pdf

http://faculty.chicagobooth.edu/jesse.shapiro/research/commit081408.pdf

“We have shown that by applying tools from neuroscience to the public-goods problem, we can get solutions that are significantly better than those that can be obtained without brain data,” says Antonio Rangel, associate professor of economics at Caltech and the paper’s principal investigator.

Here is the paper.  You should read it.  It is forthcoming in Science. Zuchetto Zip goes to Economists’ View.

The public goods aspect of the problem is not important for understanding the main result here, so here is a simplified way to think about it.  You are secretly told a number (in the public goods game this number is your willingness to pay) and you are asked to report your number.  You have a monetary incentive to lie and report a number that is lower than the one you were told. But now you are placed in a brain scanner and told that the brain scanner will collect information that will be fed into an algorithm that will try to guess your number.  And if your report is different from the guess, you will be penalized.

The result is that subjects told the truth about their number.  This is a big deal but it is important to know exactly what the contribution is here.

  1. The researchers have not found a way to read your mind and find out your number.  Indeed, even under the highly controlled experimental conditions where the algorithm knows that your number is one of two possible numbers and after doing 50 treatments per subject and running regressions to improve the algorithm, the prediction made by the algorithm is scarcely better than a random guess.  (See table S3)
  2. In that sense "brain data" is not playing any real role in getting subjects to tell the truth.  Instead, it is the subjects' belief that the scanner and algorithm will accurately predict their value which induces them to tell the truth.  Indeed, after conducting the experiment the researchers could have thrown away all of their brain data and just randomly given out payments, and this would not have changed the result as long as the subjects were expecting the brain data to be used.
  3. The subjects were clearly mistaken about how good the algorithm would be at predicting their values.
  4. Therefore, brain scans as incentive mechanisms will have to wait until neuroscientists really come up with a way of reading numbers from your brain.

There is a carefully researched article appearing in the Huffington Post that says yes.

The Federal Reserve’s Board of Governors employs 220 PhD economists and a host of researchers and support staff, according to a Fed spokeswoman. The 12 regional banks employ scores more. (HuffPost placed calls to them but was unable to get exact numbers.) The Fed also doles out millions of dollars in contracts to economists for consulting assignments, papers, presentations, workshops, and that plum gig known as a “visiting scholarship.” A Fed spokeswoman says that exact figures for the number of economists contracted with weren’t available. But, she says, the Federal Reserve spent $389.2 million in 2008 on “monetary and economic policy,” money spent on analysis, research, data gathering, and studies on market structure; $433 million is budgeted for 2009.

All of the facts in this article are true.  Any academic economist sees first-hand the role the Fed has in supporting research in the area of monetary economics.  And it is easy to see how this article could lead an outsider to its conclusions.

Paul Krugman, in Sunday’s New York Times magazine, did his own autopsy of economics, asking “How Did Economists Get It So Wrong?” Krugman concludes that “[e]conomics, as a field, got in trouble because economists were seduced by the vision of a perfect, frictionless market system.”

So who seduced them?

The Fed did it.

I am not a macroeconomist, and apart from an occasional free lunch I have never been the beneficiary of Fed research funding, so I easily could be out of the loop on this conspiracy.  But for what it is worth, I don't see any evidence of it.  All of the facts are true, but the conclusion follows from them only if you want it to.

I am sure it would be easy to compile a large list of papers funded by Fed research money that are critical of Fed monetary policy.

The US Open is here. From the Straight Sets blog, food for thought about the design of a scoring system:

A tennis match is a war of attrition that is won after hundreds of points have been played and perhaps a couple of thousand shots have been struck. On top of that, the scoring system also very much favors even the slightly better player.

“It’s very forgiving,” Richards said. “You can make mistakes and win a game. Lose a set and still win a match.”

Fox said tennis’s scoring system is different because points do not all count the same.

"Let's say you're in a very close match and you get extended to set point at 5-4," Fox said, referring to a best-of-three format. "There may be only four or five points separating you from your opponent in the entire match. And yet, if you win that first set point, you've essentially already won half the match. Half the match! And not only that — your opponent goes back to zero. They have to start completely over again. And the same thing happens in every game, not just each set. The loser's points are completely wiped out. So there are these constant pressure points you're facing throughout the match."

There are two levels at which to assess this claim, the statistical effect and the incentive effect.  Statistically, it seems wrong to me.  Compare tennis scoring to basketball scoring, i.e. cumulative scoring.  Suppose the underdog gets lucky early and takes an early lead.  With tennis scoring, there is a chance to consolidate this early advantage by clinching a game or set.  With cumulative scoring, the lucky streak is short-lived because the law of large numbers will most likely eradicate it.

The incentive effect is less clear to me, although my instinct suggests it goes the other way.  Being a better player might mean that you are able to raise your level of play in the crucial points.  We could think of this as having a larger budget of effort to allocate across points.  Then grouped scoring enables the better player to know which points to spend the extra effort on.  This may be what the latter part of the quote is getting at.
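
The statistical claim can be checked with a quick Monte Carlo under simplified rules (games to 4 points and sets to 6 games, both win-by-2, no tiebreaks; the point-win probability is invented):

```python
import random

random.seed(1)
P = 0.55  # better player's chance of winning any single point

def race(p, target):
    """First to `target`, win by 2.  True if the better player wins."""
    a = b = 0
    while True:
        if random.random() < p:
            a += 1
        else:
            b += 1
        if a >= target and a - b >= 2:
            return True
        if b >= target and b - a >= 2:
            return False

def tennis_match(p):
    """Best of 3 sets; a set is a race to 6 games, a game a race to 4 points."""
    sets = [0, 0]
    while max(sets) < 2:
        games = [0, 0]
        while max(games) < 6 or abs(games[0] - games[1]) < 2:
            games[0 if race(p, 4) else 1] += 1
        sets[0 if games[0] > games[1] else 1] += 1
    return sets[0] == 2

def cumulative_match(p, total_points=300):
    """Plain cumulative scoring: most points out of a fixed total wins."""
    pts = sum(random.random() < p for _ in range(total_points))
    return pts > total_points - pts

n = 2_000
tennis_wins = sum(tennis_match(P) for _ in range(n)) / n
cumulative_wins = sum(cumulative_match(P) for _ in range(n)) / n
print(tennis_wins, cumulative_wins)
```

At least in this toy version, cumulative scoring favors the better player even more strongly than tennis scoring does, consistent with the law-of-large-numbers argument above.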

A recent article in Wired about increases in the placebo effect over time has provoked much discussion.  Here, for example, is a good counterpoint from Mindhacks.

But let’s assume that placebo is indeed a potentially effective treatment for psychological reasons.  When you are a subject in a placebo-controlled study you are told that the drug you are taking is a placebo with probability p.  Presumably, the magnitude of the placebo effect depends on p, with smaller p implying larger placebo effect.

This means there is a socially optimal p.  That is, if doctors were to prescribe placebo as a part of standard practice, they should do so randomly and with the optimal probability p.  Will they?

No, due to a problem akin to the Tragedy of the Commons.  An individual doctor's incentive to prescribe placebo is based on trading off the costs and benefits to his own patients.  But the socially optimal placebo rate is based on a trade-off of the benefit to the individual patient versus the cost to the overall population.  That cost arises because every time a doctor gives placebo to his patient, this raises p and lowers the effectiveness of placebo for all patients.

So doctors will use placebo too often.
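
A toy commons model makes the externality explicit (the functional forms are my assumptions, not anything in the Wired piece): suppose that when a fraction p of prescriptions are placebo, each placebo patient gets benefit 1 - p, and each placebo prescription has cost c.

```python
c = 0.2  # cost per placebo prescription (e.g. a forgone real treatment)

def welfare(p):
    """Aggregate welfare when a fraction p of prescriptions are placebo."""
    return p * (1 - p) - c * p

# Planner: maximize p(1 - p) - c*p, which gives p* = (1 - c) / 2.
p_social = (1 - c) / 2

# Equilibrium: each doctor takes p as given and prescribes placebo whenever
# the benefit 1 - p exceeds c, so prescribing stops only at p = 1 - c.
p_equilibrium = 1 - c

print(p_social, p_equilibrium)                    # 0.4 0.8
print(welfare(p_social), welfare(p_equilibrium))  # equilibrium overshoots
```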

We have all heard about problems of overfishing and how quotas and incentive mechanisms have been effective in slowing the depletion of stocks of endangered fish.  But while over-utilization of a common resource can be addressed with such measures, it is trickier to implement schemes that incentivize investment toward actively replenishing depleted fisheries.   The problem is that any actor bears all of the investment cost but, given the common pool, enjoys only a small fraction of the benefits.

Enter Giant Robotic Roaming Fish Farms.  These are essentially mobile fences in the sea that have the potential of bringing the benefits of coastal fish farming to the open waters, solving a number of traditional problems.

Traditional fish farms typically consist of cages submerged in shallow, calm waters near shore, where they are protected from the weather and easily accessible for feeding and maintenance.

But raising fish in such close quarters can contribute to the spread of disease among the animals, and wastes may foul the waters. Cages must be moved to keep the waters clean and the fish healthy.

Deepwater cages offer cleaner, more freely circulating ocean water and natural food, which can yield tastier fish.

Fences create property rights and property rights solve incentive problems.  As an illustration, here is a remarkable paper demonstrating the rapid advances in agricultural development in the American plains that coincided with the invention of barbed wire.

Chopped is a show on the Food Network where four chefs compete to win $10K.  There are three knockout rounds/courses.  In each round, the remaining chefs get some mystery ingredients and have 30 minutes to cook four portions of a dish.  One chef is chopped each course by a panel of judges till one remains standing at the end of the dessert round.

In the show I watched tonight, the mystery ingredients in the first round were merguez sausage, broccoli and chives.  Chef Ming from Le Cirque tried to make chive crepes with a sausage and broccoli stuffing and a milk-broccoli stem sauce.  He used a fancy technique where he turned a frying pan upside down and cooked the crepe on the bottom of the pan.  He ran out of time and did not make the sauce.  Crepes turned out crap.  Basically things did not go too well and he was "chopped".  Far weaker chefs made it to the next round.  But Ming's strategy was wrong: he was one of the best chefs.  If he had cooked a safe dish instead of a hard one, he would have made it into the next round.  This got me thinking about the optimal strategy for the game.  Here is my conjecture.

To win you have to cook at least one "home run" dish and two good dishes.  The third and final dessert round seems to be the hardest.  This time the mystery ingredients were grape leaves, sesame seeds, pickled ginger and melon!  It was very challenging to make something edible with that, let alone creative and delicious.  If you are lagging (i.e. your opponent has had a home run in a previous round and you have not), you have to go for a home run in the dessert round.  Otherwise, just do the best you can: the random choice of ingredients will play a bigger role in your success than your own effort.  Reasoning backwards, this implies that you have to go for a home run in one of the first two rounds.

The second round is where I would try for one.  If the other two are going for home runs, I could still play it safe and land in the middle.  I might do this if I already had a home run in the first round.  But if I played it safe in the first round, I have to go for it now.  And it is likely that I'm in the latter scenario, because in the first round you (at least if you are one of the better chefs) should not go for a home run, as the only way you're going to lose is if you come last out of four people.  Only the most mediocre chef should play a risky strategy in the first round, as this is the only way to win (think of John McCain picking Sarah Palin, the "Hail Mary Pass" strategy when he was lagging behind).  The other three should produce a nice, safe appetizer.  If they are truly the best three chefs they are likely to make it to the second round in equilibrium anyway.  And all three will have safe dishes.  And all three should go for home runs in the second round, as the dessert round is not a good time to attempt a great dish.
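
The round-one claim can be stress-tested with a small simulation (the scoring model, skill levels and noise spreads are all invented): treat a safe dish as a low-variance draw around your skill, a home-run attempt as a high-variance draw, and chop whoever draws lowest.

```python
import random

random.seed(2)
skills = [10, 9, 8, 5]  # chef 3 is the clear underdog

def underdog_survival(risky_chef, trials=20_000):
    """Fraction of rounds in which the underdog (index 3) is not chopped."""
    survived = 0
    for _ in range(trials):
        scores = [s + random.gauss(0, 4.0 if i == risky_chef else 1.0)
                  for i, s in enumerate(skills)]
        survived += scores.index(min(scores)) != 3
    return survived / trials

safe = underdog_survival(risky_chef=None)  # underdog plays it safe
risky = underdog_survival(risky_chef=3)    # underdog goes for a home run
print(round(safe, 2), round(risky, 2))
```

In this toy model the risky strategy sharply raises the underdog's survival chances, while the stronger chefs have nothing to gain from the extra variance.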

So, Ming did not get the game strategy right and he got knocked out earlier than he should have.  So, future contestants, take note of this blog entry.  I am also willing to provide consulting for chefs if they cook a free dinner for me.

At Legoland, admission is discounted for two-year-olds. But a child must be at least three for most of the fun attractions.

At the ticket window the parents are asked how old the child is. But at the ride entrance the attendants ask the children directly.

The parents lie. The children tell the truth.

Via MR, this article describes the obstacles to a market for private unemployment insurance.  Why is it not possible to buy an insurance policy that would guarantee your paycheck (or some fraction of it) in the event of unemployment?  The article cites a number of standard sources of insurance market failure, but most of these apply also to private health insurance and other markets, and yet those markets function.  So there is a puzzle here.

The main friction is adverse selection.  Individuals have private information about (and control over!) their own likelihood of becoming unemployed.  The policy will be purchased by those who expect that they will become unemployed.  This makes the pool of insured especially risky, forcing the insurer to raise premiums in order to avoid losses. But then the higher premiums cause a selection of even more risky applicants, etc.  This can lead to complete market breakdown.

In the case of unemployment insurance there is a potential solution to this problem which borrows from the idea of instrumental variables in statistics.  (Fans of Freakonomics will recognize this as one of the main tools in the arsenal of Steve Levitt and many empirical economists.)  The idea behind instrumental variables is to sidestep a sample selection problem in statistical analysis by conditioning on a variable which is correlated with the one you care about but free of the confounding correlations you want to isolate away.

The same idea can be used to circumvent an adverse selection problem.  Instead of writing a contract contingent on your employment outcome, the contract can be contingent on the aggregate unemployment rate.  You pay a premium, and you receive an adjustment payment (or stream of payments) when the aggregate unemployment rate in your locale increases above some threshold.

Since the movements in the aggregate unemployment rate are correlated with your own outcome, this is valuable insurance for you.  But, and this is the key benefit, you have no private information about movements in the aggregate unemployment rate.  So there is no adverse selection problem.

The potential difficulty with this is that there will be a lot of correlation in movements in unemployment across locations, and this removes some of the risk-sharing economies typical of insurance.  (With fire insurance, each individual's outcome is uncorrelated with everyone else's, so an insurer of many households faces essentially no risk.)
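
What might such an index contract look like?  A minimal sketch, with every number invented:

```python
import random

random.seed(3)

THRESHOLD = 0.08   # aggregate unemployment rate that triggers a payout
PAYOUT = 10_000.0  # lump sum paid when the threshold is crossed

def index_payout(aggregate_rate):
    """Payout depends only on the public aggregate rate, never on any fact
    the insured privately knows, so there is nothing to adversely select on."""
    return PAYOUT if aggregate_rate > THRESHOLD else 0.0

# Simulate an economy where bad years raise both the aggregate rate and each
# individual's layoff risk -- the correlation that makes the index valuable.
paid_when_laid_off = layoffs = 0
for _ in range(100_000):
    bad_year = random.random() < 0.2
    aggregate = 0.10 if bad_year else 0.05
    laid_off = random.random() < (0.12 if bad_year else 0.03)
    if laid_off:
        layoffs += 1
        paid_when_laid_off += index_payout(aggregate) > 0

# Fraction of layoffs in which the policy actually pays out.
print(round(paid_when_laid_off / layoffs, 2))
```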

Millions of internet users who use Skype could be forced to find other ways to make phone calls after parent company eBay said it did not own the underlying technology that powers the service, prompting fears of a shutdown.

Why are there firms?  A more flexible way to manage transactions would be through a system of specific contracts detailing what each individual should produce, to whom it should be delivered and what he should be paid.  It would also be more efficient:  a traditional firm makes some group of individuals the owners and a separate group of individuals the workers.  The firm is saddled with the problem of motivating workers when the profits from their efforts go to the owners.

The problem of course is that most of these contracts would be far too complicated to spell out and enforce.  And without an airtight contract, disputes occur.  Because disputes are inefficient, the disputants almost always find some settlement which supplants the terms of the contract.  Knowing all of this in advance, the contracts would usually turn out to be worthless.  The strategy of bringing spurious objections to existing contracts in order to trigger renegotiation at more favorable terms is called holdup. The holdup problem is considered by some economic theorists to be the fundamental friction that shapes most of economic organization.

Case in point, Skype and eBay.  eBay acquired the Skype brand and much of the software from the founders, JoltId, but did not take full ownership of the core technology, instead entering a licensing agreement which grants Skype exclusive use.  Since that time, Skype has become increasingly popular and a strong source of revenue for eBay.  Now eBay is being held up.  JoltId claims that eBay has violated the licensing agreement, citing a few obscure and relatively minor details in the contract.  Litigation is pending.

Not coincidentally, eBay has publicly stated its intention to spin off Skype and take it public, a sale that would bring a huge infusion of capital to eBay at a time when it is reinventing its core business.  That sale is in turn being held up because Skype is worthless without the license from JoltId.  This puts JoltId in an excellent bargaining position to renegotiate for a better share of those spoils. (On the other hand, had Skype not done as well as it did, JoltId would not have such a large share of the downside.)

Whatever were the long-run total expected payments eBay was going to make to JoltId in return for exclusive use of the technology, it should have paid that much to own the technology outright, become an integrated firm, and avoided the holdup problem.

And don’t worry.  You got your Skype.  Holdup may change the terms of trade, but it is in neither party’s interest to destroy a valuable asset.

At Volokh Conspiracy, Ilya Somin writes:

This week, many of my former students will be undergoing the painful experience of taking the Virginia bar exam. My general view on bar exams is that they should be abolished, or at least that you should not be required to pass one in order to practice law. If passing the exam really is an indication of superior or at least adequate legal skills, then clients will choose to hire lawyers who have passed the exam even if passage isn’t required to be a member of the bar. Even if a mandatory bar exam really is necessary, it certainly should not be administered by state bar associations, which have an obvious interest in reducing the number of people who are allowed to join the profession, so as to minimize competition for their existing members.

What changes would we see if it were no longer necessary to pass the bar in order to practice law?  We can analyze this in two steps.  First, hold everything else about the bar exam fixed and ask how the market will react to making it voluntary.

The first effect would be to encourage more entry into the profession.  Going to law school is not as much of a risk if you know that failing the bar is not fatal.  There would be massive entry into specialized law education.  Rather than go to a full-fledged law school, many would take a few practical courses focused on a few services.  Traditional law schools would respond by becoming even more academic and removed from practice.

Eventually the bar will be taken only by high-level lawyers who work in novel areas and whose services require more creativity and less paper pushing.  But the bar will no longer be the binding entry barrier to these areas.  The economic rationale for the entry barrier is to create rents for practicing lawyers so that they have something to lose.  This keeps them honest and makes their clients trust them.

Now reputation will provide these rents. Law firms, even more so than now, will consist of a few generalist partners who embody all of the reputation of the firm and then an army of worker-attorneys.  All of the rents will go to the partners.  The current path of associate-promoted-to-partner will be restricted to only a very small number of elites.

As a result of all this, competition actually decreases at the high end.

All of these changes will alter the economics of the bar exam itself.  Since the bar is no longer the binding entry barrier, bar associations become essentially for-profit certification intermediaries.  This pushes them either toward becoming more selective, extracting further rents at the high end, or toward becoming less selective, effectively a driver’s license that everyone passes (and pays a nominal fee for).  Which direction is optimal depends on elasticities.  Probably they will offer separate high-end and low-end exams.

My bottom line is that making the bar voluntary increases welfare, but perhaps for different reasons than Somin has in mind.  Routine services will become more competitive and this is good.  Increased concentration at the high end is probably also good: market power means less output, and for the kinds of lawyering these lawyers do, reduced output is welfare-improving.

To remind you, reCAPTCHA asks you to decipher two smeared words before you can register for, say, a gmail account.  One of the words is being used to test whether you are a human and not a computer.  The reCAPTCHA system knows the right answer for that word and checks whether you get it right.  The reCAPTCHA system doesn’t know the other word and is hoping you will help figure it out.  If you get the test word right, then your answer on the unknown word is assumed to be correct and used in a massive parallel process of digitizing books.  The words are randomly ordered so you cannot know which is the test word.

Once you know this, you may wonder whether you can save yourself time by just filling in the first word and hoping that it is the test word.  You will be right with 50% probability.  And if so, you will cut your time in half.  If you are unlucky, you try again, and you keep on guessing one word until you get lucky.  What is the expected time from using this strategy?

Let’s assume it takes 1 second to type in one word.  If you answer both words you are sure to get through at the cost of 2 seconds of your time.  If you answer one word each time then with probability 1/2 you will pass in 1 second, with probability 1/4 you will pass in 2 seconds, probability 1/8 you pass in 3 seconds, etc.    Then your expected time to pass is

\sum_{t=1}^\infty \frac{t}{2^t}

Is this more or less than 2?  Answer after the jump.
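If you want to check the series numerically rather than sum it by hand, a one-liner does it (the partial sums converge geometrically, so 100 terms is effectively exact):

```python
# Expected seconds for the one-word guessing strategy: you finally pass
# on attempt t with probability 1/2^t, having spent t seconds in total.
expected_time = sum(t / 2**t for t in range(1, 101))

# The safe strategy of typing both words always costs exactly 2 seconds.
safe_time = 2.0
```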


A taxi driver has a fixed cost:  he has to get out of bed, get into his cab and start roaming the streets.  He is compensated by a fixed rate per mile.  The combination of these two creates a basic incentive problem which explains a lot of common frustration with cab rides.  In order for the fixed rate to compensate the cab driver for his fixed costs, it must be set above the flow marginal cost of driving.  The implication is that the cab driver always has an incentive to extend your trip longer than is necessary.  And he has an incentive to reject short trips. And they saturate airports but you can’t find them in your neighborhood, etc, etc…

The anti-trust division of the Justice Department and FTC are reviewing potentially anti-competitive practices by the dominant providers of wireless services.  In my previous post on the subject I discussed the theory of exclusive contracts as illegal barriers to entry.  In this post I will take up the conventional argument that an exclusive agreement can spur investment by providing a guaranteed return.

AT&T absorbed significant upfront costs by developing and expanding their 3G network at a time when only the Apple iPhone was capable of using its higher speeds and advanced capabilities.  AT&T and Apple entered into a relationship in which AT&T would be the exclusive provider of 3G wireless services for the iPhone, guaranteeing AT&T a stream of revenue which would eventually recoup its investment and turn a profit.  If this exclusive contract were to be scrutinized by anti-trust authorities, AT&T could be expected to argue that without protection from future competition these revenues would not be guaranteed and they would not have been able to make the investment in the first place.

Putting this argument in its proper light requires paying close attention to the distinction between total profits and incentives at the margin.  To justify an exclusive contract on efficiency grounds it is not enough to show that exclusivity raises total profits, it must be shown that in addition it adds to the marginal incentives to invest in the new technology.

Imagine that AT&T has no contract with Apple.  The worry is that a competitor will develop a rival 3G network and compete with AT&T for Apple’s business.  If this happens, AT&T is left out in the cold and makes a loss on its investment.  On the other hand, if AT&T has a contract to be the exclusive iPhone 3G provider, then Apple cannot unilaterally break this contract and deal with the new entrant.  Of course if the new provider was a more attractive partner, perhaps because of lower costs or a better technology, Apple could try to buy out of the contract, but AT&T would not accept any payment less than what it would get from insisting on the exclusive contract.

Thus, with an exclusive contract, when a competitor appears AT&T is guaranteed a minimal payoff equal to the total revenue it would earn if it rejected any buyout and insisted on the exclusive deal.  This is the basis of the conventional intuition supporting exclusive dealing.  But what exactly determines this payoff?

The key to understanding this is to consider that once the contract is in place and AT&T’s investment is sunk, the two parties are in a situation of bilateral monopoly.  There is some total surplus that will be generated from their mutual agreement and this surplus will be divided between the two through some bargaining.  The exclusive contract determines the status quo from which they will bargain, and the amount of surplus to be divided is the gain from Apple switching to the new rival.  Investment by AT&T improves the value to Apple of dealing with AT&T, and while this raises AT&T’s status quo it also reduces the gain from switching to the new rival, and hence the bargaining surplus, by exactly the same amount.  In the resulting bilateral monopoly bargaining, these effects exactly counteract one another and the net result is that the contract adds nothing to AT&T’s marginal incentives to invest.
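The neutrality can be seen in a stylized sketch (the functional forms, the 50-50 Nash split, and the rival’s offer w are all my own illustrative assumptions, not part of the underlying model):

```python
def v(I):
    # value to Apple of dealing with AT&T, increasing in AT&T's investment I
    return 100.0 + 10.0 * I

w = 120.0  # value a rival entrant would offer Apple (fixed, for illustration)

def att_payoff_with_contract(I):
    # status quo: AT&T can insist on the exclusive deal and collect v(I);
    # 50-50 bargaining then splits the gain w - v(I) from letting Apple switch
    return v(I) + 0.5 * (w - v(I))

def att_payoff_without_contract(I):
    # now Apple's status quo is the rival's offer w; 50-50 bargaining
    # splits the gain v(I) - w from dealing with AT&T instead
    return 0.5 * (v(I) - w)

# the marginal return to investment is identical in the two regimes...
marginal_with = att_payoff_with_contract(2.0) - att_payoff_with_contract(1.0)
marginal_without = att_payoff_without_contract(2.0) - att_payoff_without_contract(1.0)

# ...and the contract shifts surplus only as a lump sum, independent of I
lump = att_payoff_with_contract(1.0) - att_payoff_without_contract(1.0)
```

In this sketch the extra payoff from the contract is exactly w no matter what I is, which is the lump-sum shifting described below.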

This is the insight of Segal and Whinston in their RAND paper “Exclusive Contracts and Protection of Investments.”

Ultimately, an exclusive contract only shifts surplus to the investing party in a lump sum, independent of the level of investment.  There are two implications of this.

  1. It cannot be argued that exclusive contracts are necessary for protection of investments.  The shifting of surplus could be just as easily achieved by replacing the exclusive contract with a lump-sum cash payment to AT&T.
  2. However, the argument described here cannot be the decisive plank in any anti-trust litigation.  If an anti-trust investigation were to go forward, AT&T/Apple could argue that instead of using the lump-sum payment (which may be impractical when the required payment is large) they chose to use an exclusive contract to do the surplus shifting.  That is, just noticing that exclusive contracts are not necessary does not imply that they are not useful.  At best, there would have to be a finding that the exclusive contract had some other anti-competitive intent, and the arguments here would just be used to disarm any defense on the basis of necessity.

He writes the blog Game Theorist and he is the author of the book Parentonomics.  Here he is on the BBC sharing his wisdom on potty training and peas.  (About 2/3 of the way in.)

Consider a hierarchical organization which promotes to level n+1 the most competent worker in level n.  In the organization’s steady state the workers will be sorted into the jobs where they are least competent.   (Porkpie ping:  Mindhacks)
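A quick simulation makes the point.  The model is my own toy version: competence at each level is an independent draw (being good at level n says nothing about level n+1), workers quit at random, and each level refills its vacancies by promoting the most competent workers from the level below:

```python
import random

random.seed(0)
LEVELS, SIZE, ROUNDS, QUIT = 4, 10, 300, 0.2

def new_worker():
    # one independent competence draw per level
    return [random.random() for _ in range(LEVELS)]

org = [[new_worker() for _ in range(SIZE)] for _ in range(LEVELS)]

for _ in range(ROUNDS):
    # random attrition at every level
    for lvl in range(LEVELS):
        org[lvl] = [w for w in org[lvl] if random.random() > QUIT]
    # each level refills from below by promoting the most competent workers
    for lvl in range(LEVELS - 1, 0, -1):
        while len(org[lvl]) < SIZE and org[lvl - 1]:
            best = max(org[lvl - 1], key=lambda w: w[lvl - 1])
            org[lvl - 1].remove(best)
            org[lvl].append(best)
    while len(org[0]) < SIZE:
        org[0].append(new_worker())  # fresh hires at the bottom

# Promoted workers were selected for competence at the job below, but
# their competence at the job they now hold is just a fresh draw.
promoted = [(w, lvl) for lvl in range(1, LEVELS) for w in org[lvl]]
competence_here = sum(w[lvl] for w, lvl in promoted) / len(promoted)
competence_below = sum(w[lvl - 1] for w, lvl in promoted) / len(promoted)
```

In the steady state of this sketch, workers above the bottom rung are systematically better at the job they were promoted out of than at the job they currently hold.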

After showing how the Vickrey auction efficiently allocates a private good we revisit some of the other social choice problems discussed at the beginning and speculate how to extend the Vickrey logic to those problems.  We look at the auction with externalities and see how the rules of the Vickrey auction can be modified to achieve efficiency.  At first the modification seems strange, but then we see a theme emerge.  Agents should pay the negative externalities they impose on the rest of society (and receive payment in compensation for the positive externalities).

We distill this idea into a general formula which measures these externalities and define a transfer function according to that formula.  The resulting efficient mechanism is called the Vickrey-Clarke-Groves mechanism.  We show that the VCG mechanism is dominant-strategy incentive compatible and we show how it works in a few examples.
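In the single-object case the general formula collapses to something familiar: the winner’s payment is the externality he imposes on the others, which is just the second-highest value.  A minimal sketch:

```python
def vcg_single_item(values):
    # Efficient allocation: the object goes to the highest-value agent.
    winner = max(range(len(values)), key=lambda i: values[i])
    # Clarke payment: the welfare the others would have enjoyed had the
    # winner been absent (the second-highest value) minus what they get
    # now (nothing) -- i.e. the Vickrey second price.
    payment = max(v for i, v in enumerate(values) if i != winner)
    return winner, payment

# e.g. vcg_single_item([10, 7, 3]) -> (0, 7)
```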

We conclude by returning to the roommate/espresso machine example.  Here we explicitly calculate the contributions each roommate should make when the espresso machine is purchased.  We remind ourselves of the constraint that the total contributions should cover the cost of the machine and we see that the VCG mechanism falls short.  Next we show that in fact the VCG mechanism is the only dominant-strategy efficient mechanism for this problem and arrive at this lecture’s punch line.
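With made-up numbers the shortfall is easy to see.  Suppose (hypothetically) the machine costs 100 and the roommates value it at 60 and 70; each roommate’s pivot contribution is the amount by which the others’ values alone fall short of the cost:

```python
cost = 100.0
values = [60.0, 70.0]           # hypothetical valuations

assert sum(values) >= cost      # efficient to buy the machine

# Each roommate's pivot (Clarke) contribution: how much the cost
# exceeds what the *others* alone would be willing to pay.
contributions = [max(0.0, cost - (sum(values) - v)) for v in values]
total = sum(contributions)      # 30 + 40 = 70, short of the 100 cost
```

Total contributions of 70 against a cost of 100: the budget does not balance, which is the punch line below.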

There is no efficient, budget-balanced, dominant-strategy mechanism.

Here are the slides.

This article (wig wiggle: The Browser) discusses various ways the Chinese judicial system differs from Western courts.  One significant difference is summarized by Columbia Law Professor Benjamin Liebman:

Yet China’s courts are as deeply committed to populism as they are to professionalism. If Chinese judges decide to ignore a law in order to preserve thousands of jobs, they aren’t violating a sacred legal precept. “They’re supposed to take into account popular interests,” Liebman explains.

The article presents this in a way that presupposes that it will be obvious to us that this is a bad approach to judging.  Indeed, public debate in the US about “judicial philosophy” also takes for granted that judges should base their opinions on the law, and not on popular opinion. But why is this so obvious?  Why shouldn’t the job of a judge be to decide on a case-by-case basis what is in the public interest?

Put aside the obvious reasons.  Popular opinion may be hard to read and political voice may not be equally allocated.  Judges are administering justice, especially for those without political voice.  Popular opinion may be short-sighted and judges are expected to be immune to short-run pressures and make decisions with better long-run consequences.

But even in cases where it is transparent and uncontroversial what the public interest is, and there is no short-run/long-run trade-off, judges still should not decide cases on that basis alone.  In fact, one of the most important functions of the court is to act against the public interest, because incentives to make good decisions typically require that we expect a bad outcome if instead we make bad decisions.  And ex post that bad outcome is typically not in the public interest.  A court that is committed to upholding the law and acting against the public interest ex post advances the public interest ex ante.

We will take a first glimpse at applying game theory to confront the incentive problem and understand the design of efficient mechanisms.  The simplest starting point is the efficient allocation of a single object.  In this lecture we look at efficient auctions.  I start with a straw-man:  the first-price sealed bid auction.  This is intended to provoke discussion and get the class to think about the strategic issues bidders face in an auction.  The discussion reaches the conclusion that there is no dominant strategy in a first-price auction and it is hard to predict bidders’ behavior.  For this reason it is easy to imagine a bidder with a high value being outbid by a bidder with a low value and this is inefficient.

The key problem with the first-price auction is that bidders have an incentive to bid less than their value to minimize their payment, but this creates a tricky trade-off as lower bids also mean an increased chance of losing altogether.  With this observation we turn to the second-price auction which clearly removes this trade-off altogether.  On the other hand it seems crazy on its face:  if bidders don’t have to put their money where their mouths are, won’t they now want to go in the other direction and raise their bids above their values?

We prove that it is a dominant strategy to bid your value in a second-price auction and that the auction is therefore an efficient mechanism in this setting.
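The dominance claim can be checked by brute force.  This is my own toy verification, not the lecture’s proof: one opponent, ties going against you, and a grid of candidate bids:

```python
def second_price_payoff(value, my_bid, other_bid):
    # second-price rule: you win only with the strictly higher bid,
    # and you pay the *other* bid, not your own
    return value - other_bid if my_bid > other_bid else 0.0

value = 50.0
grid = [float(b) for b in range(0, 101, 5)]

for other_bid in grid:
    truthful = second_price_payoff(value, value, other_bid)
    # bidding your value does at least as well as any deviation,
    # against every possible opposing bid
    assert all(truthful >= second_price_payoff(value, b, other_bid) for b in grid)
```

Exhausting a grid is of course not a proof, but it shows where the intuition comes from: your bid only determines *whether* you win, never what you pay.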

Next we explore some of the limitations of this result.  We look at externalities:  it matters not just whether I get the good, but also who else gets it in the event that I don’t.  We see that a second-price auction is not efficient anymore.  And we look at a setting with common values:  information about the object’s value is dispersed among the bidders.

For the common-value setting I do a classroom experiment where I auction an unknown amount of cash.  The amount up for sale is equal to the average of the numbers on 10 cards that I have handed out to 10 volunteers.  Each volunteer sees only his own card and then bids.  If the experiment works (it doesn’t always work) then we should see the winner’s curse in action:  the winner will typically be the person holding the highest number, and bidding something close to that number will lose money as the average is certainly lower.
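A simulation of the experiment makes the curse vivid (assuming, hypothetically, that every bidder naively bids her own card and the highest bidder wins at her bid):

```python
import random

random.seed(1)

profits = []
for _ in range(10_000):
    cards = [random.uniform(0, 100) for _ in range(10)]
    prize = sum(cards) / len(cards)   # the cash on sale: the average card
    winning_bid = max(cards)          # naive bidders bid their own numbers
    profits.append(prize - winning_bid)

average_profit = sum(profits) / len(profits)
# the winner holds the highest card, which always exceeds the average,
# so she loses money in every single trial
```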

Here are the slides.

(I got the idea for the winner’s curse experiment from Ben Polak, who auctions a jar of coins in his game theory class at Yale.  Here is a video. Here is the full set of Ben Polak’s game theory lectures on video.  They are really outstanding.  Northwestern should have a program like this.  All universities should.)

Evolutionary Psychology and, increasingly, behavioral economics spin a lot of intriguing stories explaining foibles and otherwise mysterious behaviors as the byproduct of various tricks nature utilizes to get us to do her bidding.  I am on record in this blog as being a fan of this methodology.  But I also maintain a healthy skepticism, and not just toward the tendency to concoct “just-so” stories that often ask us to reformulate our theories of huge chunks of evolutionary history just to explain some nano-economic peculiarity.

Instead, when evaluating some theory of how emotions have evolved to induce us to behave in certain ways, skepticism should be aimed squarely at the basic premise.  The theory must come with a convincing explanation of why nature would rely on a blunt instrument like emotions as opposed to all of the other tools at her disposal.  These questions seemed especially pressing when I read the following article about depression as a tool to blunt ambitions:

Dr Nesse’s hypothesis is that, as pain stops you doing damaging physical things, so low mood stops you doing damaging mental ones—in particular, pursuing unreachable goals. Pursuing such goals is a waste of energy and resources. Therefore, he argues, there is likely to be an evolved mechanism that identifies certain goals as unattainable and inhibits their pursuit—and he believes that low mood is at least part of that mechanism.

Why not a simpler mechanism:  just have us figure out that the goal is unattainable and (happily) go do something else? Don’t answer by saying that this emotional incentive mechanism evolved before our brains were advanced enough to do the calculation because the existence of an emotional response indicating the right course of action presupposes that this calculation is being made somewhere in the system.

Even granting that nature finds it convenient to do the calculation sub- (or un-)consciously and then communicate only the results to us, why use emotions?  Plants respond to incentives in the environment and they don’t need emotions to do it; presumably they are just programmed to change their “behavior” when conditions dictate.  Why would nature bother with such a messy, noisy, and indirect system of incentives rather than just give us neutral impulses?

Finally, you could try answering with the argument that evolution does not find optimal solutions, just solutions that work.  But that argument by itself can be made into a defense of everything and we are back to just-so stories.

Wimbledon, which has just gotten underway today, is a seeded tournament, like all major tennis events and other elimination tournaments.  Competitors are ranked according to strength and placed into the elimination bracket in a way that matches the strongest against the weakest.  For example, seeding is designed so that when the quarter-finals are reached, the top seed (the strongest player)  will face the 8th seed, the 2nd seed will face the 7th seed, etc.   From the blog Straight Sets:

When Rafael Nadal withdrew from Wimbledon on Friday, there was a reshuffling of the seeds that may have raised a few eyebrows. Here is how it was explained on Wimbledon.org:

The hole at the top of the men’s draw left by Nadal will be filled by the fifth seed, Juan Martin del Potro. Del Potro’s place will be taken by the 17th seed James Blake of the USA. The next to be seeded, Nicolas Kiefer moves to line 56 to take Blake’s position as the 33rd seed. Thiago Alves takes Kiefer’s position on line 61 and is a lucky loser.

Was this simply Wimbledon tweaking the draw at their whim or was there some method to the madness?

Presumably tournaments are seeded in order to make them as exciting as possible for the spectators.  One plausible goal is to maximize the chances that the top two players meet in the final, since viewership peaks considerably for the final.  But the standard seeding is not obviously the optimal one for this objective:  it makes it easy for the top seed to make the final but hard for the second seed.  Switching the positions of the top ranked and second ranked players might increase the chances of having a 1-2 final.

You would also expect early-round matches to become more competitive.  Competitiveness in contests, like tennis matches, is determined by the relative strength of the opponents.  Switching the positions of 1 and 2 would even out the matches played by the top player at the expense of unbalancing the matches played by the second player; the average balance across matches would be unchanged.  If effort is concave in the relative strength of the opponents, then the total effect would be to increase competitiveness.
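One can check the hunch about a 1-2 final in a toy model.  Take an 8-player bracket and assume (my assumption, not anything from the post) that player i beats player j with probability proportional to assumed strengths; then compare the standard draw with the draw where seeds 1 and 2 swap bracket positions:

```python
strengths = {seed: 9 - seed for seed in range(1, 9)}  # seed 1 strongest

def beats(i, j):
    # assumed win-probability model: proportional to relative strength
    return strengths[i] / (strengths[i] + strengths[j])

def reaches_final(a, b, c, d):
    # P(player a wins its half: beats b in the QF, then the c-vs-d winner)
    return beats(a, b) * (beats(c, d) * beats(a, c) + beats(d, c) * beats(a, d))

# standard draw: quarter-finals 1v8 and 4v5 in one half, 2v7 and 3v6 in the other
standard = reaches_final(1, 8, 4, 5) * reaches_final(2, 7, 3, 6)
# swap the bracket positions of seeds 1 and 2
swapped = reaches_final(2, 8, 4, 5) * reaches_final(1, 7, 3, 6)
```

Under this particular model the swap does raise the probability of a 1-2 final, though only slightly; with other strength profiles the comparison could go either way.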

When you start thinking about the game theory of tournaments, your first thought is: what has Benny Moldovanu said on the subject?  And sure enough, Google turns up this paper by Groh, Moldovanu, Sela, and Sunde which seems to have all the answers.  Incidentally, Benny will be visiting Northwestern next fall and I expect that he will be bringing his tennis racket…