
After showing how the Vickrey auction efficiently allocates a private good, we revisit some of the other social choice problems discussed at the beginning and speculate about how to extend the Vickrey logic to those problems.  We look at the auction with externalities and see how the rules of the Vickrey auction can be modified to achieve efficiency.  At first the modification seems strange, but then a theme emerges:  agents should pay for the negative externalities they impose on the rest of society (and receive payments in compensation for the positive externalities).

We distill this idea into a general formula which measures these externalities and define a transfer function according to that formula.  The resulting efficient mechanism is called the Vickrey-Clarke-Groves mechanism.  We show that the VCG mechanism is dominant-strategy incentive compatible and we show how it works in a few examples.

We conclude by returning to the roommate/espresso machine example.  Here we explicitly calculate the contributions each roommate should make when the espresso machine is purchased.  We remind ourselves of the constraint that the total contributions should cover the cost of the machine and we see that the VCG mechanism falls short.  Next we show that in fact the VCG mechanism is the only dominant-strategy efficient mechanism for this problem and arrive at this lecture’s punch line.

There is no efficient, budget-balanced, dominant-strategy mechanism.
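As a toy illustration of the transfer formula, here is a sketch (in Python, with made-up bidder values) of the VCG payments for the single-object case, where the formula reduces to the familiar second-price rule:

```python
def vcg_single_item(values):
    """values: bidder name -> reported value for the object.
    Returns the efficient winner and each bidder's VCG payment:
    the externality she imposes, i.e. the welfare the others would
    get if she were absent minus their welfare at the chosen outcome."""
    winner = max(values, key=values.get)
    payments = {}
    for i in values:
        others = [v for j, v in values.items() if j != i]
        welfare_without_i = max(others)                 # best outcome for the others if i were absent
        welfare_with_i = 0 if i == winner else values[winner]
        payments[i] = welfare_without_i - welfare_with_i
    return winner, payments
```

With values 10, 7, 3 the high bidder wins and pays 7 (the second-highest value); the losers impose no externality and pay nothing.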

Here are the slides.

The Unbundled Economy: It’s one of the implications of (my guess at, given that I haven’t actually read) Free; it’s apparently what Tyler Cowen is talking about in his new book.  As the price of transmitting small chunks of information crashes to zero, the efficient market structure no longer involves assembly and sale of bundles of chunks, but instead sale of the chunks themselves and after-market assembly.

Case in point, the porn industry (which is pretty much always at the leading edge of structural change.)

Vivid, one of the most prominent pornography studios, makes 60 films a year. Three years ago, almost all of them were feature-length films with story lines. Today, more than half are a series of sex scenes, loosely connected by some thread — “vignettes” in the industry vernacular — that can be presented separately online. Other major studios are making similar shifts.

In lieu of plot, there are themes. Among the new releases from New Sensations, a studio that makes 24 movies a month, is “Girls ’n Glasses,” made up of scenes of women having sex while wearing glasses.

But old habits die hard, even in the porn world:

“The feature is not as big a part of the industry today,” Mr. Orenstein said. But he says he still plans two to three bigger-budget releases each year, including the recently shot “2040,” which is about the pornography business of the future. Mr. Orenstein described the movie as “an almost Romeo-and-Juliet story between an aging porn star and a cyborg.”

As a part of a broader revival of Section 2 of the Sherman Act, the anti-trust division of the Department of Justice, under Obama appointee Christine Varney, has opened a review of potentially anti-competitive practices by the dominant telecom providers.  One specific issue that has received attention is exclusionary contracting between wireless carriers (AT&T) and handset manufacturers (Apple iPhone.) The FTC is reportedly also exploring these contracts.  Exclusive contracts bind a manufacturer’s handsets to specific carriers, thereby hindering or preventing end-users from migrating to other carriers.  The widespread nature of these contracts may create a barrier against entry by new, smaller wireless providers who cannot offer their users handsets that compete with the top models.

The review is reported to be at an early stage and may not lead to a formal investigation, but as this develops there are a few basic economic arguments to keep in mind.  To start with, there is the benchmark “Chicago School” view which starts with the observation that exclusionary contracts require the voluntary agreement of the handset manufacturers.  The manufacturers internalize the costs of the entry barrier because without entry they will have fewer competitive carriers to sell their phones to.  Therefore, exclusive contracts must compensate manufacturers for this loss, implying that these contracts will be in place only when the total surplus from exclusion exceeds the cost, i.e. when it is efficient. The Chicago argument is a longstanding pillar of regulatory policy that still holds sway today.  From the article:

Jon Muleta, former wireless bureau chief of the FCC, said exclusive handset deals won’t be an issue the government can pursue on antitrust grounds unless major handset makers say they’re being forced into the deals. “The equipment providers enter into these deals willingly,” Mr. Muleta said.

The Chicago argument ignores the costs to end users from reduced competition in wireless service.  It would apply only if manufacturers internalize all of the benefits to consumers from increased competition. But under any reasonable model of the wireless market structure, end-user consumer surplus would increase with more competition for wireless service and this becomes an externality relative to the parties in the Chicago bargain.

Secondly, the Chicago argument has been discredited as it takes a naive view of the way contract negotiation would work.  Implicitly, the Chicago argument assumes that handset manufacturers must be compensated at least what they would earn if entry were to occur.  But scale economies imply that a new carrier will enter only if sufficiently many, or sufficiently large, manufacturers remain free of exclusive deals.  The dominant carriers can use a “divide and conquer” strategy which exploits the difficulty for handset manufacturers to coordinate severing their exclusive deals.  Without this coordinated threat, manufacturers cannot extract the compensation envisioned in the Chicago argument, and again efficiency breaks down.

The definitive references here are Rasmusen, Ramseyer, and Wiley, “Naked Exclusion,” and a follow-on comment by Segal and Whinston, both in the American Economic Review.

There is a separate defense of exclusive contracts, often cited and also reflected in the article.

Paul Roth, AT&T’s president of retail sales and service, told Congress last month that the billions of dollars the company invests in its network and services would be put at risk if government were to “impose intrusive restrictions on these services or the way that service providers and manufacturers collaborate on next-generation devices.” Mr. Roth said there is plenty of competition and innovation in the wireless industry.

AT&T’s tremendous investment in its 3G network will pay off only because of its exclusive deal with Apple to market the iPhone.  Thus, it is often argued that exclusive contracts are in fact pro-competitive as they reward investment with profits that would otherwise be subject to hold-up or competed away.  I will take up this argument in a subsequent post.

Love that sushi from Popeye’s.  (toque tilt:  kottke.)

Here is an excellent example of social choice paradoxes in practice:  the voting system for the Olympic venue.  The article illustrates cycles, failure of unanimity, and violations of independence of irrelevant alternatives.  It’s a great teaching aid, and I will certainly be using it next time I teach my intermediate micro course.  I thank Taresh Batra for the pointer.
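For readers who want a concrete illustration, here is a minimal sketch (an invented three-voter profile, not the actual venue ballots) of the kind of majority cycle the article describes:

```python
# Three voters, three alternatives.  Each list is one voter's ranking,
# best first.  Pairwise majority vote then cycles: A beats B, B beats C,
# and yet C beats A.
rankings = [["A", "B", "C"],
            ["B", "C", "A"],
            ["C", "A", "B"]]

def majority_prefers(x, y, rankings):
    """True if a strict majority of voters rank x above y."""
    wins = sum(r.index(x) < r.index(y) for r in rankings)
    return 2 * wins > len(rankings)
```

Each pairwise contest is decided 2-1, so there is no Condorcet winner: whichever venue is chosen, a majority prefers some other venue.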

By the way, there is another perfect social choice example from the Olympics.  In the 2002 women’s figure skating competition, Michelle Kwan was leading Sarah Hughes when the final skater, Irina Slutskaya, took the ice.  Slutskaya put in a sub-par performance which was nevertheless good enough to surpass Kwan.  But the real surprise was that this performance by Slutskaya reversed the ranking of Kwan and Hughes, so that Hughes leaped ahead of both.  In the end, Hughes took the gold, Slutskaya the silver, and Kwan went home with the bronze.  Here is an old story.
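The skating story is a violation of independence of irrelevant alternatives: a third competitor’s placement changed the relative ranking of the top two.  A toy sketch with a Borda-style point count (made-up judge profiles, not the actual 2002 ordinal scoring system) shows how this can happen:

```python
def borda(rankings):
    """Borda scores: last place gets 0 points, each step up one more."""
    n = len(rankings[0])
    scores = {}
    for r in rankings:
        for pos, cand in enumerate(r):
            scores[cand] = scores.get(cand, 0) + (n - 1 - pos)
    return scores

# Four hypothetical judges ranking K(wan), H(ughes), S(lutskaya).
# Every judge's K-vs-H comparison is IDENTICAL in both profiles;
# only S's placement moves.  Yet the K-vs-H leader flips.
before = [["K", "S", "H"], ["K", "S", "H"], ["H", "K", "S"], ["H", "K", "S"]]
after  = [["S", "K", "H"], ["S", "K", "H"], ["H", "S", "K"], ["H", "S", "K"]]
```

Before, K outscores H 6 to 4; after, H outscores K 4 to 2, even though no judge changed her mind about K versus H.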

This article (wig wiggle: The Browser) discusses various ways the Chinese judicial system differs from Western courts.  One significant difference is summarized by Columbia Law Professor Benjamin Liebman:

Yet China’s courts are as deeply committed to populism as they are to professionalism. If Chinese judges decide to ignore a law in order to preserve thousands of jobs, they aren’t violating a sacred legal precept. “They’re supposed to take into account popular interests,” Liebman explains.

The article presents this in a way that presupposes that it will be obvious to us that this is a bad approach to judging.  Indeed, public debate in the US about “judicial philosophy” also takes for granted that judges should base their opinions on the law, and not on popular opinion. But why is this so obvious?  Why shouldn’t the job of a judge be to decide on a case-by-case basis what is in the public interest?

Put aside the obvious reasons.  Popular opinion may be hard to read and political voice may not be equally allocated.  Judges are administering justice, especially for those without political voice.  Popular opinion may be short-sighted and judges are expected to be immune to short-run pressures and make decisions with better long-run consequences.

But even in cases where it is transparent and uncontroversial what the public interest is and there is no short-run/long-run trade-off, judges still should not decide cases on that basis alone.  In fact, one of the most important functions of the court is to act against the public interest.  Because incentives to make good decisions typically require that we expect a bad outcome if instead we make bad decisions.  And ex post that bad outcome is typically not in the public interest.  A court that is committed to uphold the law and act against the public interest ex post advances the public interest ex ante.

I enjoyed this article in the Boston Globe which surveys a variety of theories for the (mostly anecdotal) tendency for the most vocal moralizers to be the most prone to vice.  When you read an article like this you have to start with the simple null hypothesis that, other things equal, making a person more concerned about moral behavior will make them more inclined to act morally.  Many of the stories in this article are tempting, mostly because we want to hate hypocrites, but ultimately they don’t put up a good counterargument to this benchmark view.  However, the following excerpt is more subtle and, in my opinion, the most robust story offered.

When asked about the phenomenon of the hypocritical moralizer, psychologists will often point to “projection,” an idea inherited from Freud. What it means – and there is a large literature to back it up – is that if someone is fixated on a particular worry or goal, they assume that everyone else is driven by that same worry or goal. Someone who covets his neighbor’s wife, in other words, would tend, rightly or wrongly, to see wife-coveting as a widespread phenomenon, and if that person were a politician or preacher, he might spend a lot of his time spreading the word about the dangers of adultery.

There has been a run on one of the largest banks in an economics-themed online role-playing game called Eve.  The event merited an article at the BBC.  The run was triggered when Ricdic, an executive of the bank, made off with a large sum of virtual lucre and exchanged it for real-world cash.

Eve Online has about 300,000 players all of whom inhabit the same online universe. The game revolves around trade, mining asteroids and the efforts of different player-controlled corporations to take control of swathes of virtual space.

It has now emerged that Ricdic used the cash to put down a deposit on a house and to pay medical bills.

“I’m not proud of it at all, that’s why I didn’t brag about it,” Ricdic told Reuters. “But you know, if I had to do it again, I probably would’ve chosen the same path based on the same situation.”

Apparently, the bank had tremendous reserves and has so far withstood the run.  Here is more information.  Either real-world bank regulators have something to learn from Eve or the other way around because here is Ricdic’s comeuppance:

Ricdic has now been thrown out of the game as trading in-game cash for real money is against Eve Online’s terms and conditions.

The rules governing play within Eve would not have sanctioned Ricdic if he had simply stolen the cash and used it in the game, nor if he had bought kredits with real dollars.

Fedora Flourish:  BoingBoing

A few years ago, we had breakfast at Sandeep’s house and he made us a delicious meal.  One of the dishes was a strange egg and tortilla chip creation that everybody loved.  I got the recipe from Sandeep and it has become part of our regular rotation ever since.  We never knew what to call it, so in my house it has always been known as Sandeep’s Special Breakfast.  (I have since had breakfast in Mexico City and noticed a resemblance to something called Chilaquiles, so I guess that is what it must be.)  It’s extremely simple to make and super yummy.  It works great for lunch or dinner too.

1 yellow onion, sliced into wedges.

1 bag of Whole Foods restaurant style tortilla chips.  (Whole Foods is not important but you want the chips that are made from tortillas, not the denser chips that are made directly from masa.)

8 eggs, lightly beaten

1 jar of tomatillo salsa.  Anything is fine here, but this stuff called Xochitl works really well.  It has a smoky flavor that makes your Special Breakfast extra special.  You can get it at Chicago-area Whole Foods.

Over high heat, add olive oil to a large saute pan and saute/fry the onions.  You want them to brown and then soften a little.  Turn down the heat and add a few handfuls of tortilla chips to the pan, breaking them with your hands into medium-sized pieces.  Toss them around in the pan to get them coated with the oil.  Then add the eggs.  Let them sit in the pan for a minute to cook a bit and then break the whole mass up and turn it over to cook some more.  When the eggs are not quite completely done, pour in some of the salsa.  The right quantity is something you figure out from experience.  It should not be drenched in salsa.  If the salsa is watery you should raise the heat and cook off some of the water.

You are done.  It looks something like this on the plate.


(Until you serve the plate that is.  Not long after that the plate is empty.)

Via Robin Goldstein, the work of Coco Krumme who analyzed wine reviews and classified words according to whether they are typically used to describe expensive or inexpensive wines.

She found that “about 65% of commonly occurring words are non-overlapping.” Words like “old,” “elegant,” “intense,” “supple,” “velvety,” “smoky,” “tobacco,” and “chocolate” predict expensive wines; “pleasing,” “refreshing,” “value,” “enjoy,” “bright,” “light,” “fresh,” “tropical,” “pink,” “fruity,” “good,” “clean,” “tasty,” and “juicy” predict cheap wines. As for suggested pairings, “steak” and “shellfish” predict expensive wines; “chicken” predicts cheap wines.

As Robin points out it matters whether the reviews were based on blind tastings.  If so, then the choice of word is in response to the taste of the wine and the correlation with price just tells us which words reviewers use to convey good taste.  (Assuming you think that price is correlated with taste.)  If the tastings were not blind then it is more likely that reviewers are responding to the label and are choosing words in response to the price.
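As a toy version of Krumme’s exercise (with invented two-line corpora, not her data), one can count which words appear only in reviews of expensive wines and which only in reviews of cheap ones:

```python
from collections import Counter

# Invented miniature corpora for illustration only.
expensive = ["an old elegant wine with smoky tobacco and chocolate notes",
             "intense supple and velvety on the palate"]
cheap = ["a refreshing fruity wine that is a pleasing value",
         "bright light and tasty good clean fun"]

def distinctive(a_docs, b_docs):
    """Return the words appearing in one corpus but not the other."""
    a = Counter(w for d in a_docs for w in d.split())
    b = Counter(w for d in b_docs for w in d.split())
    return set(a) - set(b), set(b) - set(a)
```

On real review data the interesting step is the one Krumme takes next: checking what fraction of common words are non-overlapping across the two price tiers.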

Compared to non-fiction.  Co-authorships leverage specialization.  Certainly there are heterogeneous strengths in fiction writing and this should create gains from collaboration.  But we don’t see it.  I can’t think of any great work of fiction that was co-authored.  There must be a good reason.

  1. Writing style is crucial in fiction.  Multiple voices would make the work feel disjointed.  They could try to collaborate on the writing process and together create one voice but maybe this puts too much of a drag on the creative process.
  2. Still, there are some who are good at imagining plots and characters and others who excel at the stage of actually writing once the idea has been conceived.  Why don’t we see this kind of partnership?
  3. I bet there are great partnerships like this but we never know it because the partners agree to a single nom de plume.

My bottom line is that, ironically, the attraction of great fiction is a connection with the author.  When we read beautiful prose or get turned on by an ingenious plot twist, we think of the author and we enjoy being close to the mind that created it.  Multiple authors would confuse and dilute this feeling.

Jonah Lehrer illustrates a common misunderstanding of (im)probability.  He writes:

It’s been a hotly debated scientific question for decades: was Joe DiMaggio’s 56-game hitting streak a genuine statistical outlier, or is it an expected statistical aberration, given the long history of major league baseball?

He is referring to the observation that 56-game hitting streaks, while intuitively improbable, will nevertheless happen when the game has been around for long enough.  Does this make it less of a feat?

  1. Say I have a monkey banging on a keyboard.  Take any sequence of letters.  The chance that the monkey will bang out that particular sequence is impossibly small.  But one sequence will be produced.  When we see that sequence produced do we change our minds and say that’s not so surprising after all because there was certain to be one unlikely sequence produced?  No.  Similarly, the chance that somebody will hit safely in 56 straight games could be high, but the chance that it will be player X is small.  Indeed, that probability is equal to the probability that player X is the greatest streak hitter ever to play the game.  So if X turns out to be Joe DiMaggio then we conclude that Joe DiMaggio indeed accomplished quite a feat.
  2. We might be asking a different question.  We grant that DiMaggio achieved the highly improbable and hit for the longest streak of any player in history, but we ask whether 56 is really all that long?  After all, he didn’t hit for 57, which is even less likely.  To address this question we might ask, on average, how many players “should” hit safely in 56 straight games in the time that the game has been around?  But this question is very easy to answer.  Our best estimate of the expected number of players to hit 56-game streaks is 1, the actual number.  (Because the number is close to zero, this estimate is noisy, but this is still the best estimate without making any assumptions about the underlying distribution.)
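To put rough numbers on the first point, one can estimate by simulation the chance that a single player with a given per-game hit probability produces a 56-game streak in one season.  The parameters below are stand-ins for illustration, not DiMaggio’s actual numbers:

```python
import random

def prob_streak(p, n_games, streak_len, trials=2000, seed=1):
    """Monte Carlo estimate of the probability of at least one hitting
    streak of streak_len games in a season of n_games, where the player
    gets a hit in each game independently with probability p."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        run = best = 0
        for _ in range(n_games):
            if rng.random() < p:
                run += 1
                best = max(best, run)
            else:
                run = 0
        if best >= streak_len:
            successes += 1
    return successes / trials
```

Even with a generous per-game hit probability, the single-season, single-player probability is tiny; it is only by multiplying over many players and many seasons that the aggregate chance of *someone* doing it becomes appreciable, which is exactly the distinction drawn above.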

In case you have not been following the catfight, let me get you up to speed.  Chris Anderson wrote a book called Free.  I haven’t read it, but it apparently says “all your ideas are belong to us” because the price of ideas is crashing to zero.  Malcolm Gladwell says “please don’t let my employer read that”…I mean, “No, it’s not.”

Let’s have a model.  There are tiny ideas and big ideas.  The tiny ideas are more like facts, or observations, or experiences.  They are costless to produce but costly to communicate.  They are highly decentralized in that everybody produces their own heterogeneous tiny ideas.  The big ideas are assembled from a large quantity of tiny ideas.  Different people have different production technologies for producing big ideas from small ones.  These could differ just in cost, or also in the quality of the big ideas produced; this changes the story a little but doesn’t change the economics.

Start with a world where the marginal cost of communicating a tiny idea to another individual is large.  Then the equilibrium market structure has big-idea producers who incur the high cost of acquiring tiny ideas, assemble them into big ideas and communicate the big ideas to the masses for a price.  This market structure sustains high prices for big ideas and sustains entry by big-idea specialists.

Now suppose the marginal cost of communicating the tiny ideas shrinks to zero.  Then an alternative for end users is to assemble their own big ideas for their own consumption out of the tiny ideas they acquire themselves for close to nothing.  The cost disadvantage that the typical end user has is compensated by his ability to customize his palette of tiny ideas and resulting big ideas to complement his idiosyncratic endowment of other ideas, tastes, etc.   The price of big ideas crashes.  Former producers of big ideas exit the market.  This is all efficient.

An important implication of this model is that the products that Anderson expects to be free are not the products Gladwell produces.  So when Gladwell says that this is absurd because the economics do not support big ideas being sold at a price of zero, he is right.  But that is because the big ideas are not being sold at all, and this is all efficient.

Our colleagues, Eran Shmaya and Rakesh Vohra have started a blog, The Leisure of the Theory Class.  Only three posts so far, but it promises to be a feast of Gale-Stewart games, and gossip.  I look forward to them making fun of Sandeep too.

Not Exactly Rocket Science describes an experiment in which vervet monkeys are observed to trade grooming favors for fruit.  At first one of the monkeys had an exclusive endowment of fruit and earned a monopoly price.  Next, competition was introduced.  The endowment was now equally divided between two duopolist monkeys and as a result the price in terms of willingness-to-groom dropped.

Now, were the monkeys playing Cournot (marginal cost equals residual marginal revenue) or Bertrand (price equals marginal cost)?  (The marginal cost of trading an apple for a grooming session is the opportunity cost of not eating it.)  We need another treatment with three sellers to know.  If the price falls even further then it’s Cournot.  In Bertrand the price hits the competitive point already with just two.

Intermediate micro question:  Can Monkey #1 increase his profits by buying the apples from Monkey #2 at the equilibrium price and then acting as a monopolist?
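For readers who want the diagnostic spelled out, here is a sketch with linear demand P = a − Q and constant marginal cost c (the parameter values are invented): the Cournot price keeps falling as sellers are added, while the Bertrand price sits at marginal cost from two sellers on.

```python
def cournot_price(a, c, n):
    """Symmetric Cournot equilibrium price with inverse demand P = a - Q,
    constant marginal cost c, and n firms: each produces (a - c)/(n + 1),
    so the price is (a + n*c)/(n + 1)."""
    return (a + n * c) / (n + 1)

def bertrand_price(c, n):
    """With two or more price-setting Bertrand firms, price = marginal cost."""
    return c if n >= 2 else None
```

With a = 10 and c = 2, the Cournot price falls from 14/3 with two sellers to 4 with three, while the Bertrand price is 2 either way; that difference is what a three-monkey treatment would detect.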

At the blog Everything Finance, Jonathan Parker breaks down the implications of the State of California issuing IOUs to roll over its debts, essentially creating a new currency whose value is pegged to the US Dollar.  He makes a number of interesting points, including the observation that since California cannot print Dollars, and cannot issue (conventional) debt, the IOUs place the State in a predicament reminiscent of financially distressed countries having to defend a pegged exchange rate.

And unfortunately, the history of fixed exchange rates in practice includes lots and lots of these effective defaults.  Governments that can issue these i.o.u.’s and have trouble balancing budgets tend to issue a greater value of their currencies than they have the will or ability to maintain.  And default follows.

Prior to “maturity,” will these IOUs trade at some market price reflecting the probability of default?  One question is whether banks will be interested in buying IOUs, offering liquidity in return for the asset and a premium.  The strategic issue is whether politically the State will find it more or less attractive to default if the IOUs are still largely held by private citizens, or instead mostly by banks.

My guess is that, in a crisis, a small number of banks would more effectively pressure the State to meet its obligations than if IOU holdings were less concentrated.  If so, then I would expect banks to be buying IOUs at a steep discount.  But does this create a Grossman-Hart style free-rider problem analogous to tendering shares in takeover bids?
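A back-of-the-envelope way to think about the discount: under risk neutrality, an IOU’s price is its expected payoff discounted to maturity.  This toy pricing function (all parameter values hypothetical) is just that arithmetic:

```python
import math

def iou_price(face, p_default, recovery, r, t):
    """Toy risk-neutral price of an IOU: full face value with probability
    1 - p_default, a recovery fraction of face on default, discounted
    continuously at rate r over t years."""
    expected = (1 - p_default) * face + p_default * recovery * face
    return expected * math.exp(-r * t)
```

The steeper the discount banks demand, the more default risk (or the lower a recovery) the market is implicitly pricing in.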

Apparently we have arrived at the long run and we are not dead.

Do you remember the Microsoft anti-trust case?  The anti-trust division of the US Department of Justice sought the breakup of Microsoft for anti-competitive practices mostly centering around integrating Internet Explorer into the Windows operating system.  In fact, an initial ruling found Microsoft in violation of an agreement not to tie new software products into Windows and mandated a breakup, separating the operating systems business from the software applications business.  This ruling was overturned on appeal and eventually the case was settled with an agreement that imposed no further restrictions on Microsoft’s ability to bundle software but did require Microsoft to share APIs with third-party developers for a 5-year period.

Today, all of the players in that case are mostly irrelevant:  AOL, Netscape, Red Hat, Java.  Indeed, Microsoft itself is close to irrelevance in the sense that any attempt today at exploiting its operating system market power to extend its monopoly would cause at most a short-run adjustment period before it would be ignored.

Microsoft was arguing at the time that it was constantly innovating to maintain its market position and it was impossible to predict from where the next threat to its dominance would appear.  Whether or not the first part of their claim was true, the second part certainly turned out to be so.  It is hard to see a credible case that the Microsoft anti-trust investigation, trial, and settlement played anything more than a negligible role in bringing us to this point.  Indeed the considerations there, focusing on the internals of the operating system and contracts with hardware manufacturers, are orthogonal to developments in the market since then.  The operating system is a client and today clients are perfect substitutes.  The rents go to servers and servers live on the internet unconstrained by any “platform” or “network effects”, indeed creating their own.

The lesson of this experience is that in a rapidly changing landscape, intervention can wait.  Even intervention that looks urgent at the time.  Almost certainly the unexpected will happen that will change everything.

I read news mostly through an RSS reader.  The Wall Street Journal syndicates only short excerpts of their articles and if I click through I get a truncated version of the article followed by a friendly invitation to subscribe to the Journal in order to view the rest of the article.  It looks like this.

But it’s not hard to get the full text.  I just type the title of the article into Google, and the first link leads to the full text, no subscription required.  I always explained this to myself using a simple market-segmentation idea.  WSJ will not give their content away to someone who is browsing their site directly because that person has revealed a high value for WSJ content.  Someone who is googling has revealed only that they are looking for relevant content, without regard to source.  There is more competition for such a user so the price is lower.

But today I noticed that bing, Microsoft’s new search engine, does not get the same special treatment.  If I bing “At Chicken Plant, A Recession Battle,” the link provided leads to the same truncated article as my rss reader.  Since users have free entry across search platforms I can’t see any reason why bing-searchers (bingers?) would be systematically different than googlers in terms of the economics above.  Therefore I am giving up on my theory.  What are the alternatives?

  1. Google has a contract with WSJ?
  2. WSJ would like to shut out googlers too but finds it hard to shut off a service that users have come to expect. Knowing this, they are keeping bingers out from the outset.
  3. The game between content providers has multiple equilibria.  On the google platform they are playing the users’ preferred equilibrium.  On the bing platform they have coordinated on their preferred equilibrium.
  4. Google has figured out a secret back-door that bing hasn’t found and WSJ just hasn’t gotten around to closing.

OK, the ideas are getting more and more lame.  I am stumped.

Incidentally, there was an article in the New York Times about DOJ investigations of Google, and a Google PR offensive:

“Competition is a click away,” Mr. Wagner says. It’s part of a stump speech he has given in Silicon Valley, New York and Washington for the last few months to reporters, legal scholars, Congressional staff members, industry groups and anybody else who might influence public opinion about Google.

“We are in an industry that is subject to disruption and we can’t take anything for granted,” he adds.

Rings a bell.

I collect kludges.  It’s an especially welcome addition to the collection when it involves a tasty snack:

DunceCap Doff:  There I Fixed It.

Top chess players, until recently, held their own against even the most powerful chess playing computers.  These machines could calculate far deeper than their human opponents and yet the humans claimed an advantage:  intuition.  A computer searches a huge number of positions and then finds the best.  For an experienced human chess player, the good moves “suggest themselves.”  How that is possible is presumably a very important mystery, but I wonder how one could demonstrate that qualitatively the thought process is different.

Having been somewhat obsessed recently with Scrabble, I thought of the following experiment.  Suppose we write a computer program that tries to create words from Scrabble tiles using a simple brute-force method.  The computer has a database of words.  It randomly combines letters, checks whether the result is in its database, and outputs the most valuable word it can identify in a fixed length of time.  Now consider a contest between two computers programmed in the same way which differ only in the size of their database, the first knowing a subset of the words known by the second.  The task is to come up with the best word from a fixed number of tiles.  Clearly the second would do better, but I am interested in how the advantage varies with the number of tiles.  Presumably, the more tiles the greater the advantage.

I want to compare this with an analogous contest between a human and a computer to measure how much faster a superior human’s advantage increases in the number of tiles.  Take a human Scrabble player with a large vocabulary and have him play the same game against a fast computer with a small vocabulary.  My guess is that the human’s advantage (which could be negative for a small number of tiles) will increase in the number of tiles, and faster than the stronger computer’s advantage increases in the computer-vs-computer scenario.

Now there may be many reasons for this, but what I am trying to get at is this.  With many tiles, brute-force search quickly plateaus in terms of effectiveness because the additional tiles act as noise making it harder for the computer to find a word in its database.  But when humans construct words, the words “suggest themselves” and increasing the number of tiles facilitates this (or at least hinders it more slowly than it hinders brute-force.)
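The computer-vs-computer half of the experiment is easy to mock up.  Here is a crude sketch (a toy five-tile rack and toy vocabularies; word length stands in for point value) of the brute-force searcher run with two database sizes:

```python
import random

def brute_force_best(tiles, dictionary, tries, seed=0):
    """Randomly sample letter sequences from the rack and keep the longest
    dictionary word found within a fixed budget of tries -- a crude
    stand-in for searching for a fixed length of time."""
    rng = random.Random(seed)
    best = ""
    for _ in range(tries):
        k = rng.randint(2, len(tiles))
        candidate = "".join(rng.sample(tiles, k))
        if candidate in dictionary and len(candidate) > len(best):
            best = candidate
    return best

tiles = list("crate")
small_vocab = {"cat", "act", "tar"}                         # subset...
large_vocab = small_vocab | {"care", "react", "crate", "trace"}  # ...of this
```

Because both runs draw the same candidate sequences, the larger database can only do weakly better; the interesting question above is how that gap behaves as the rack grows.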

We will take a first glimpse at applying game theory to confront the incentive problem and understand the design of efficient mechanisms.  The simplest starting point is the efficient allocation of a single object.  In this lecture we look at efficient auctions.  I start with a straw-man:  the first-price sealed bid auction.  This is intended to provoke discussion and get the class to think about the strategic issues bidders face in an auction.  The discussion reaches the conclusion that there is no dominant strategy in a first-price auction and it is hard to predict bidders’ behavior.  For this reason it is easy to imagine a bidder with a high value being outbid by a bidder with a low value and this is inefficient.

The key problem with the first-price auction is that bidders have an incentive to bid less than their value to minimize their payment, but this creates a tricky trade-off as lower bids also mean an increased chance of losing altogether.  With this observation we turn to the second-price auction, which clearly removes this trade-off altogether.  On the other hand it seems crazy on its face:  if bidders don’t have to put their money where their mouths are, won’t they now want to go in the other direction and raise their bids above their values?

We prove that it is a dominant strategy to bid your value in a second-price auction and that the auction is therefore an efficient mechanism in this setting.
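The dominant-strategy result can also be checked mechanically.  A minimal sketch of my own (not from the lecture slides): compute a bidder's payoff in a sealed-bid second-price auction, then verify on a grid of bids and rival-bid profiles that no deviation from truthful bidding ever does strictly better.

```python
def second_price_payoff(my_bid: float, my_value: float, rival_bids) -> float:
    """Payoff in a sealed-bid second-price auction.  Ties are resolved
    against us, which only makes the case for truthful bidding harder."""
    highest_rival = max(rival_bids)
    if my_bid > highest_rival:
        return my_value - highest_rival  # win, pay the second-highest bid
    return 0.0

def truthful_weakly_dominates(my_value: float, bid_grid, rival_grids) -> bool:
    """Check that bidding my_value is never beaten by any alternative bid,
    against every rival-bid profile on the grid."""
    return all(
        second_price_payoff(my_value, my_value, rivals)
        >= second_price_payoff(b, my_value, rivals)
        for b in bid_grid
        for rivals in rival_grids
    )
```

Note the asymmetry the check exposes: overbidding can turn a win into a loss (bid 12 with value 10 against a rival bid of 11 and you pay 11 for something worth 10), while underbidding can only forfeit profitable wins; bidding exactly your value avoids both mistakes.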

Next we explore some of the limitations of this result.  We look at externalities:  it matters not just whether I get the good, but also who else gets it in the event that I don’t.  We see that a second-price auction is not efficient anymore.  And we look at a setting with common values:  information about the object’s value is dispersed among the bidders.

For the common-value setting I do a classroom experiment where I auction an unknown amount of cash.  The amount up for sale is equal to the average of the numbers on 10 cards that I have handed out to 10 volunteers.  Each volunteer sees only his own card and then bids.  If the experiment works (it doesn’t always work) then we should see the winner’s curse in action:  the winner will typically be the person holding the highest number, and bidding something close to that number will lose money as the average is certainly lower.
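The experiment is easy to simulate.  Here is a sketch under assumed rules (uniform card values and naive first-price bidding at 90% of one's own card; the actual classroom numbers are not specified in the post):

```python
import random

def one_round(rng, n_cards=10, lo=0.0, hi=100.0, shade=0.9):
    """One round of the classroom experiment: the prize is the average of
    n cards; each bidder sees only his own card and naively bids a shaded
    fraction of it (first price).  Returns the winner's profit."""
    cards = [rng.uniform(lo, hi) for _ in range(n_cards)]
    prize = sum(cards) / n_cards
    winning_bid = shade * max(cards)  # the highest-card holder wins
    return prize - winning_bid

def average_winner_profit(trials=20000, seed=0):
    """Average winner's profit over many rounds."""
    rng = random.Random(seed)
    return sum(one_round(rng) for _ in range(trials)) / trials
```

On average the winner loses money: the highest of ten uniform draws sits far above the average of the ten, so bidding anywhere near your own (highest) card guarantees the curse.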

Here are the slides.

(I got the idea for the winner’s curse experiment from Ben Polak, who auctions a jar of coins in his game theory class at Yale.  Here is a video. Here is the full set of Ben Polak’s game theory lectures on video.  They are really outstanding.  Northwestern should have a program like this.  All Universities should.)

Wine and movies have a lot in common.  They are both worldwide markets for highly differentiated products with critics who are visible and economically important.  But while there are as many film critics as there are films and opinions about films, there are just a handful of highly influential wine critics:  Robert Parker’s Wine Advocate, The Wine Spectator, and a few others.  This is somewhat counterintuitive because there are many, many more wines than films.  Here are a few thoughts.

  1. People know their taste in movies better than they know their taste in wine.  This makes it easier to find idiosyncratic movie critics that have similar tastes.  Similar critics face an entry barrier in the wine world.
  2. All wines taste the same and the role of a critic is just to tell you which wines you are supposed to like and which wines you can brag about drinking.  This creates a natural oligopoly among the wine critics who the market coordinates on.
  3. Wines are given as gifts and movies are not. This means that wine critics are rewarded for reflecting general rather than specialized tastes.
  4. A very small fraction of wines are good and wine criticism just means tasting thousands of wines until you find the good ones.  This creates increasing returns to scale in wine criticism, another source of natural monopoly power.
  5. The movie business is less competitive so a blockbuster film earns more rents and as a result there is more rent seeking, especially in marketing.  Thus the emergence of David Manning.  There is no analogous force behind “The feel good wine of the year!”
  6. Wine critics provide a service for wine-makers, film critics are serving film-goers.  What makes a good wine critic is the ability to articulate what wine buyers will buy.  Whoever is best at this will dominate.

Cynics believe some version of 6 and 2 (Parkerization.)  I don’t understand why 5 wouldn’t be the same for wine and film; maybe this is just a matter of time.  4 may be true in the mid-range, but whether this matters depends on whether you think wine critics are really influential there or rather at the high end, where there are relatively few consistent performers.  I lean toward 1, Gary Vaynerchuck notwithstanding, which is a less cynical version of 6.

I believe that the study referred to in this CNN piece is pure noise.  (Don’t bother watching it.  Bottom line:  1 in 5 teens admits to “sexting.”)  But that doesn’t mean that it carries no information.  The mere fact that this claim would be repeated, at the expense of the marginal piece of news, turns pure noise into information.

Evolutionary Psychology and, increasingly, behavioral economics spin a lot of intriguing stories explaining foibles and otherwise mysterious behaviors as the byproduct of various tricks nature utilizes to get us to do her bidding.  I am on record in this blog as being a fan of this methodology.  But I also maintain a healthy skepticism and not just at the tendency to concoct “just-so” stories that often ask us to reformulate our theories of huge chunks of evolutionary history just to explain some nano-economic peculiarity.

Instead, when evaluating some theory of how emotions have evolved to induce us to behave in certain ways, skepticism should be aimed squarely at the basic premise.  The theory must come with a convincing explanation why nature would rely on a blunt instrument like emotions as opposed to all of the other tools at her disposal.  These questions seemed especially pressing when I read the following article about depression as a tool to blunt ambitions:

Dr Nesse’s hypothesis is that, as pain stops you doing damaging physical things, so low mood stops you doing damaging mental ones—in particular, pursuing unreachable goals. Pursuing such goals is a waste of energy and resources. Therefore, he argues, there is likely to be an evolved mechanism that identifies certain goals as unattainable and inhibits their pursuit—and he believes that low mood is at least part of that mechanism.

Why not a simpler mechanism:  just have us figure out that the goal is unattainable and (happily) go do something else? Don’t answer by saying that this emotional incentive mechanism evolved before our brains were advanced enough to do the calculation because the existence of an emotional response indicating the right course of action presupposes that this calculation is being made somewhere in the system.

Even granting that nature finds it convenient to do the calculation sub-(or un-)consciously and then communicate only the results to us, why use emotions?  Plants respond to incentives in the environment and they don’t need emotions to do it; presumably they are just programmed to change their “behavior” when conditions dictate.  Why would nature bother with such a messy, noisy, and indirect system of incentives rather than just give us neutral impulses?

Finally, you could try answering with the argument that evolution does not find optimal solutions, just solutions that work.  But that argument by itself can be made into a defense of everything and we are back to just-so stories.

How often does your mind wander?

Some of the most striking evidence comes from Jonathan Schooler, a psychologist at the University of California at Santa Barbara who is one of the leading researchers on mind wandering. In 2005 he and his colleagues told a group of undergraduates to read the opening chapters of War and Peace on a computer monitor and then to tap a key whenever they realized they were not thinking about what they were reading. On average, the students reported that their minds wandered 5.4 times in a 45-minute session. Other researchers have gotten similar results with simpler tasks, such as pronouncing words or pressing a button in response to seeing particular letters and numbers. Depending on the experiment, people spend up to half their time not thinking about the task at hand—even when they’ve been told explicitly to pay attention.

When I was a kid I thought there was something wrong with me because I would “read” pages at a time without paying attention to what I was reading.  My eyes would crawl over the words and move from line to line and in a certain real sense I was reading but my conscious mind was completely uninvolved.  After a few pages I would notice that I had absorbed nothing.

I still have a wandering mind but over time I have come to view it as a net asset.  The key is learning to teach your wandering mind to leave breadcrumbs.  Because it knows how to get to places that your conscious mind doesn’t.

Because a fair amount of mind wandering happens without our ever noticing, the solutions it lets us reach may come as a surprise. There are many stories in the history of science of great discoveries occurring to people out of the blue. The French mathematician Henri Poincaré once wrote about how he struggled for two weeks with a difficult mathematical proof. He set it aside to take a bus to a geology conference, and the moment he stepped on the bus, the solution came to him. It is possible that mind wandering led him to the solution. John Kounios of Drexel University and his colleagues have done brain scans that capture the moment when people have a sudden insight that lets them solve a word puzzle. Many of the regions that become active during those creative flashes belong to the default network and the executive control system as well.

The article is worth a read. (akubura ack:  Mindhacks)

I came across this philosophy paper (miter missive:  The Browser) which ponders whether the hypothesis of an omnipotent and omniscient God is any more likely to imply that God is good rather than God is evil.

Suppose, for example, that the universe shows clear evidence of having been designed. To conclude, solely on that basis, that the designer is supremely benevolent would be about as unjustified as it would be to conclude that it is, say, supremely malevolent, which clearly would not be justified at all.

The problem always appears at a much more basic level for me.  Suppose you are an omnipotent God.  What do you do?  Obviously to answer that question you should start by identifying all of the feasible alternatives (ok that one is easy, everything is feasible because you are omnipotent), rank them according to your preferences, and do the one that ranks at the top.  Wait a minute.  What are your preferences?

You are omnipotent, remember.  It’s not just that you get to choose your preferences.  Your preferences do not exist until you create them.  Ok.  So first you choose your preferences, then solve the problem of what to do given those preferences.  How do you choose your preferences?  It is no help trying to choose the preferences that are easiest to satisfy blissfully, because you are omnipotent:  all preferences are trivial to satisfy blissfully.  But why do you want to want that anyway?  How do you even know what you want to want?  You don’t have any preferences yet, right?

So I think that an omnipotent God would be too neurotic to even get out of bed and decide whether to be good or evil.

Should texting, emailing and browsing be banned in meetings? This article discusses the current climate.

Despite resistance, the etiquette debate seems to be tilting in the favor of smartphone use, many executives said. Managing directors do it. Summer associates do it. It spans gender and generation, private and public sectors.

A few years ago, only “the investment banker types” would use BlackBerrys in meetings, said Frank Kneller, the chief executive of a company in Elk Grove Village, Ill., that makes water-treatment systems. “Now it’s everybody.” He said that if he spotted 6 of 10 colleagues tapping away, he knew he had to speed up his presentation.

While I would always prefer to have my iPhone handy, I would volunteer to keep the meeting smartphone free.  And that is not because I want the undivided attention of my colleagues.  If we all deprive ourselves we create high-powered incentives to keep the meeting as short as possible.  That sentiment is echoed here:

Mr. Brotherton, the consultant, wrote in an e-mail message that it was customary now for professionals to lay BlackBerrys or iPhones on a conference table before a meeting — like gunfighters placing their Colt revolvers on the card tables in a saloon. “It’s a not-so-subtle way of signaling ‘I’m connected. I’m busy. I’m important. And if this meeting doesn’t hold my interest, I’ve got 10 other things I can do instead.’ ”

Wimbledon, which has just gotten underway today, is a seeded tournament, like all major tennis events and other elimination tournaments.  Competitors are ranked according to strength and placed into the elimination bracket in a way that matches the strongest against the weakest.  For example, seeding is designed so that when the quarter-finals are reached, the top seed (the strongest player)  will face the 8th seed, the 2nd seed will face the 7th seed, etc.   From the blog Straight Sets:

When Rafael Nadal withdrew from Wimbledon on Friday, there was a reshuffling of the seeds that may have raised a few eyebrows. Here is how it was explained on Wimbledon.org:

The hole at the top of the men’s draw left by Nadal will be filled by the fifth seed, Juan Martin del Potro. Del Potro’s place will be taken by the 17th seed James Blake of the USA. The next to be seeded, Nicolas Kiefer moves to line 56 to take Blake’s position as the 33rd seed. Thiago Alves takes Kiefer’s position on line 61 and is a lucky loser.

Was this simply Wimbledon tweaking the draw at their whim or was there some method to the madness?

Presumably tournaments are seeded in order to make them as exciting as possible for the spectators.  One plausible goal is to maximize the chances that the top two players meet in the final, since viewership peaks considerably for the final.  But the standard seeding is not obviously the optimal one for this objective:  it makes it easy for the top seed to make the final but hard for the second seed.  Switching the positions of the top ranked and second ranked players might increase the chances of having a 1-2 final.

You would also expect early-round matches to become more competitive.  Competitiveness in contests, like tennis matches, is determined by the relative strength of the opponents.  Switching the positions of 1 and 2 would even out the matches played by the top player at the expense of unbalancing the matches played by the second player; the average balance across matches would be unchanged.  If effort is concave in the relative strength of the opponents, then the total effect would be to increase competitiveness.
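The 1-2 final question is easy to explore numerically.  Here is a sketch using a Bradley-Terry win model with made-up player strengths (the numbers are illustrative assumptions, not estimates of anything): it computes exactly the probability that players 1 and 2 meet in the final of an 8-player bracket, under the standard seeding and with the two top players' slots switched.

```python
def winners(bracket, s):
    """Probability each player wins this (sub)bracket.  A bracket is a
    player id (leaf) or a pair of sub-brackets.  Bradley-Terry model:
    i beats j with probability s[i] / (s[i] + s[j])."""
    if isinstance(bracket, int):
        return {bracket: 1.0}
    left = winners(bracket[0], s)
    right = winners(bracket[1], s)
    out = {}
    for i, pi in left.items():
        for j, pj in right.items():
            meet = pi * pj  # probability i and j meet in this round
            out[i] = out.get(i, 0.0) + meet * s[i] / (s[i] + s[j])
            out[j] = out.get(j, 0.0) + meet * s[j] / (s[i] + s[j])
    return out

def p_meet_in_final(bracket, s, a=1, b=2):
    """Probability players a and b win opposite halves, i.e. meet in the final."""
    ha, hb = winners(bracket[0], s), winners(bracket[1], s)
    return ha.get(a, 0.0) * hb.get(b, 0.0) + ha.get(b, 0.0) * hb.get(a, 0.0)

# Illustrative strengths: player 1 is clearly the strongest.
s = {1: 8.0, 2: 4.0, 3: 3.0, 4: 2.5, 5: 2.0, 6: 1.5, 7: 1.2, 8: 1.0}
standard = (((1, 8), (4, 5)), ((2, 7), (3, 6)))
swapped  = (((2, 8), (4, 5)), ((1, 7), (3, 6)))  # players 1 and 2 trade slots
```

With these strengths the swap raises the probability of a 1-2 final slightly (about 0.340 versus 0.332), consistent with the intuition in the post: handicapping the stronger player's draw costs less than it gains the second player.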

When you start thinking about the game theory of tournaments, your first thought is what has Benny Moldovanu said on the subject.  And sure enough, google turns up this paper by Groh, Moldovanu, Sela, and Sunde which seems to have all the answers.  Incidentally, Benny will be visiting Northwestern next fall and I expect that he will be bringing his tennis racket…

One of the highly touted features of the iPhone is the abundance of applications available for near-instantaneous download and seamless installation.  In traditional Apple fashion, in order to keep full control of the software environment and maintain this seamless experience, Apple exercises strict control over which apps are made available through the app store.  Short of jailbreaking your phone, there is no other way to install third-party software.

The process by which apps are submitted and reviewed strikes many as highly inefficient.  (It also strikes many as anti-competitive, but that is not the subject of this post.  There are legitimate economic arguments supporting Apple’s control of the platform and for my purposes here I will take those as given, although for now I remain agnostic on the question.) Developers sink significant investment producing launch-ready versions of their software and only then learn definitively whether the app can be sold.  There is no recourse if the submission is denied.

(Just recently, we witnessed an extreme example of the kind of deadweight loss that can result.  A fully licensed, full-featured Commodore64 Operating System emulator, 1 year in the making, was just rejected from the app store.)

Unfortunately, this is an inevitable inefficiency due to the ubiquitous problem of incomplete contracting.  In a first-best world, Apple would publicize an all-encompassing set of rules outlining exactly what software would be accepted and what would be rejected.  In this imaginary world of complete contracts, any developer would know in advance whether his software would be accepted and no effort would be wasted.

In reality it is impossible to conceive of all of the possibilities, let alone describe them in a contract.  Therefore, in this second-best world, at best Apple can publish a broad set of guidelines and then decide on a case-by-case basis when the final product is submitted.  This introduces inefficiencies at two levels.  First, the direct effect is that developers face uncertainty whether their software will pass muster and this is a disincentive to undertake the costly investment at the beginning.

But the more subtle inefficiency arises due to the incentive for gamesmanship that the imperfect contract creates.  First, Apple’s incentive in constructing guidelines ex ante is to err on the side of appearing more permissive than they intend to be.  Apple knows even less than the developers what the final product will look like and Apple values the option to bend the (unwritten) rules a bit when a good product materializes.  While it is true that Apple’s desire to keep a reputation for transparent guidelines mitigates this problem to some extent, the fact remains that Apple does not internalize all the costs of software development.

Second, because Apple cannot predict what software will appear it cannot make binding commitments to reject software that is good but erodes slightly their standards.  This gives developers an incentive to engage in a form of brinkmanship:  sink the cost to create a product highly valued by end users but which is questionable from Apple’s perspective.  By submitting this software the developer puts Apple in the difficult position of publicly rejecting software that end users want, and the fear of bad publicity may lead Apple to accept software that they would have liked to commit in advance to reject.

The iPhone app store is only a year old and many observers think of it as a short-run system that is quickly becoming overwhelmed by the surprising explosion of iPhone software.  When the app store is reinvented, it will be interesting to see how they approach this unique two-sided incentive problem.

Update: Mark Thoma further develops here.  He didn’t ask in advance for permission to do that, but if he did I would have given encouraging signals but then rejected it ex post.

That’s the title of a new essay by the omnipresent David Levine.  An excerpt:

The key difference between psychologists and economists is that psychologists are interested in individual behavior while economists are interested in explaining the results of groups of people interacting. Psychologists also are focused on human dysfunction – much of the goal of psychology (the bulk of psychologists are in clinical practices) is to help people become more functional. In fact, most people are quite functional most of the time. Hence the focus of economists on people who are “rational.” Certain kinds of events – panics, for example – that are of interest to economist no doubt will benefit from understanding human dysfunctionality. But the balancing of portfolios by mutual fund managers, for example, is not such an obvious candidate. Indeed one of the themes of this essay is that in the experimental lab the simplest model of human behavior – selfish rationality with imperfect learning – does an outstanding job of explaining the bulk of behavior.