You are currently browsing the tag archive for the ‘economics’ tag.

In case you have not been following the catfight, let me get you up to speed.  Chris Anderson wrote a book called Free.  I haven’t read it, but it apparently says “all your ideas are belong to us” because the price of ideas is crashing to zero.  Malcolm Gladwell says “please don’t let my employer read that”…I mean, “No, it’s not.”

Let’s have a model.  There are tiny ideas and big ideas.  The tiny ideas are more like facts, or observations or experiences.  They are costless to produce but costly to communicate.  They are highly decentralized in that everybody produces their own heterogeneous tiny ideas.  The big ideas are assembled from a large quantity of tiny ideas.  Different people have different production technologies for producing big ideas from small ones.  These could differ just in cost, or also in terms of the quality of big ideas that are produced; this changes the story a little but doesn’t change the economics.

Start with a world where the marginal cost of communicating a tiny idea to another individual is large.  Then the equilibrium market structure has big-idea producers who incur the high cost of acquiring tiny ideas, assemble them into big ideas and communicate the big ideas to the masses for a price.  This market structure sustains high prices for big ideas and sustains entry by big-idea specialists.

Now suppose the marginal cost of communicating the tiny ideas shrinks to zero.  Then an alternative for end users is to assemble their own big ideas for their own consumption out of the tiny ideas they acquire themselves for close to nothing.  The cost disadvantage that the typical end user has is compensated by his ability to customize his palette of tiny ideas and resulting big ideas to complement his idiosyncratic endowment of other ideas, tastes, etc.   The price of big ideas crashes.  Former producers of big ideas exit the market.  This is all efficient.
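For concreteness, here is one back-of-the-envelope way to write the model down.  The notation and the linear cost structure are mine, not anything Anderson or Gladwell committed to; treat it as a sketch of the verbal story above.

```latex
% One possible formalization (my notation). An end user can buy a big idea at
% price p, or assemble her own from n tiny ideas at cost
%   c_self = n*m + a_user - b,
% where m is the marginal cost of communicating one tiny idea, a_user her
% assembly cost, and b the value of customizing to her own tastes. A specialist
% assembles at cost n*m + a_spec, with a_spec < a_user. Users buy only if
%   p \le n\,m + a_{\text{user}} - b .
% When m is large this ceiling is high, so specialists can cover their costs and
% entry is sustained. As m \to 0 the ceiling falls toward a_{\text{user}} - b,
% which can drop below a_{\text{spec}}: the price of big ideas crashes and the
% specialists exit, exactly as in the story above.
```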

An important implication of this model is that the products that Anderson expects to be free are not the products Gladwell produces.  So when Gladwell says that this is absurd because the economics do not support big ideas being sold at a price of zero, he is right.  But that is because the big ideas are not being sold at all, and this is all efficient.

Not Exactly Rocket Science describes an experiment in which vervet monkeys are observed to trade grooming favors for fruit.  At first one of the monkeys had an exclusive endowment of fruit and earned a monopoly price.  Next, competition was introduced.  The endowment was now equally divided between two duopolist monkeys and as a result the price in terms of willingness-to-groom dropped.

Now, were the monkeys playing Cournot (marginal cost equals residual marginal revenue) or Bertrand (price equals marginal cost)?  (The marginal cost of trading an apple for a grooming session is the opportunity cost of not eating it.)  We need another treatment with three sellers to know.  If the price falls even further then it’s Cournot.  In Bertrand the price hits the competitive point already with just two.
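To see why a third monkey is the right test, here is a toy calculation with a linear demand curve.  The demand parameters and the marginal cost are made up, and of course nothing says vervet monkeys play either equilibrium exactly; the point is only the comparative static.

```python
# Toy comparison, not a claim about monkey cognition: linear inverse demand
# P = a - Q and constant marginal cost c are assumptions chosen for illustration.

def cournot_price(n, a=10.0, c=2.0):
    # Symmetric Cournot equilibrium with n sellers: each supplies (a - c) / (n + 1),
    # so the market price is (a + n * c) / (n + 1).
    return (a + n * c) / (n + 1)

def bertrand_price(n, c=2.0):
    # With two or more sellers, undercutting drives the price to marginal cost.
    return c if n >= 2 else None

for n in (1, 2, 3):
    print(f"{n} seller(s): Cournot price = {cournot_price(n):.2f}, "
          f"Bertrand price = {bertrand_price(n)}")
```

With these numbers the Cournot price keeps falling (6.00, 4.67, 4.00) as sellers are added, while the Bertrand price is already at marginal cost with two sellers, which is exactly the difference a three-seller treatment would pick up.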

Intermediate micro question:  Can Monkey #1 increase his profits by buying the apples from Monkey #2 at the equilibrium price and then acting as a monopolist?

At the blog Everything Finance, Jonathan Parker breaks down the implications of the State of California issuing IOUs to roll over its debts, essentially creating a new currency whose value is pegged to the US Dollar.  He makes a number of interesting points, including the observation that since California cannot print Dollars, and cannot issue (conventional) debt, the IOUs place the State in a predicament reminiscent of financially-distressed countries having to defend a pegged exchange rate.

And unfortunately, the history of fixed exchange rates in practice includes lots and lots of these effective defaults.  Governments that can issue these i.o.u.’s and have trouble balancing budgets tend to issue a greater value of their currencies than they have the will or ability to maintain.  And default follows.

Prior to “maturity,” will these IOUs trade at some market price reflecting the probability of default?  One question is whether banks will be interested in buying IOUs, offering liquidity in return for the asset plus a premium.  The strategic issue is whether the State will find it politically more or less attractive to default if the IOUs are still largely held by private citizens rather than mostly by banks.

My guess is that, in a crisis, a small number of banks would more effectively pressure the State to meet its obligations than if IOU holdings were less concentrated.  If so, then I would expect banks to be buying IOUs at a steep discount.  But does this create a Grossman-Hart style free-rider problem analogous to tendering shares in takeover bids?

Apparently we have arrived at the long run and we are not dead.

Do you remember the Microsoft anti-trust case?  The anti-trust division of the US Department of Justice sought the breakup of Microsoft for anti-competitive practices, mostly centering on the integration of Internet Explorer into the Windows operating system.  In fact, an initial ruling found Microsoft in violation of an agreement not to tie new software products into Windows and mandated a breakup, separating the operating systems business from the software applications business.  This ruling was overturned on appeal and eventually the case was settled with an agreement that imposed no further restrictions on Microsoft’s ability to bundle software but did require Microsoft to share APIs with third-party developers for a five-year period.

Today, all of the players in that case are mostly irrelevant.  AOL, Netscape, Redhat.  Java.  Indeed, Microsoft itself is close to irrelevance in the sense that any attempt today at exploiting its operating system market power to extend its monopoly would cause at most a short-run adjustment period before it would be ignored.

Microsoft was arguing at the time that it was constantly innovating to maintain its market position and it was impossible to predict from where the next threat to its dominance would appear.  Whether or not the first part of their claim was true, the second part certainly turned out to be so.  It is hard to see a credible case that the Microsoft anti-trust investigation, trial, and settlement played anything more than a negligible role in bringing us to this point.  Indeed the considerations there, focusing on the internals of the operating system and contracts with hardware manufacturers, are orthogonal to developments in the market since then.  The operating system is a client and today clients are perfect substitutes.  The rents go to servers and servers live on the internet unconstrained by any “platform” or “network effects”, indeed creating their own.

The lesson of this experience is that in a rapidly changing landscape, intervention can wait.  Even intervention that looks urgent at the time.  Almost certainly the unexpected will happen that will change everything.

I read news mostly through an rss reader.  The Wall Street Journal syndicates only short excerpts of their articles and if I click through I get a truncated version of the article followed by a friendly invitation to subscribe to the Journal in order to view the rest of the article.  It looks like this.

But it’s not hard to get the full text of the article.  I just use Google and type in the title of the article.  The first link I get is a link to the full text, no subscription required.  I always explained this to myself using a simple market-segmentation idea.  WSJ will not give their content away to someone who is browsing their site directly because that person has revealed a high value for WSJ content.  Someone who is googling has revealed that they are looking for relevant content, without regard to source.  There is more competition for such a user so the price is lower.

But today I noticed that bing, Microsoft’s new search engine, does not get the same special treatment.  If I bing “At Chicken Plant, A Recession Battle,” the link provided leads to the same truncated article as my rss reader.  Since users have free entry across search platforms I can’t see any reason why bing-searchers (bingers?) would be systematically different than googlers in terms of the economics above.  Therefore I am giving up on my theory.  What are the alternatives?

  1. Google has a contract with WSJ?
  2. WSJ would like to shut out googlers too but finds it hard to shut off a service that users have come to expect. Knowing this, they are keeping bingers out from the outset.
  3. The game between content providers has multiple equilibria.  On the google platform they are playing the users’ preferred equilibrium.  On the bing platform they have coordinated on their preferred equilibrium.
  4. Google has figured out a secret back-door that bing hasn’t found and WSJ just hasn’t gotten around to closing.

OK, the ideas are getting more and more lame.  I am stumped.

Incidentally, there was an article in the New York Times about DOJ investigations of Google, and a Google PR offensive:

“Competition is a click away,” Mr. Wagner says. It’s part of a stump speech he has given in Silicon Valley, New York and Washington for the last few months to reporters, legal scholars, Congressional staff members, industry groups and anybody else who might influence public opinion about Google.

“We are in an industry that is subject to disruption and we can’t take anything for granted,” he adds.

Rings a bell.

We will take a first glimpse at applying game theory to confront the incentive problem and understand the design of efficient mechanisms.  The simplest starting point is the efficient allocation of a single object.  In this lecture we look at efficient auctions.  I start with a straw-man:  the first-price sealed bid auction.  This is intended to provoke discussion and get the class to think about the strategic issues bidders face in an auction.  The discussion reaches the conclusion that there is no dominant strategy in a first-price auction and it is hard to predict bidders’ behavior.  For this reason it is easy to imagine a bidder with a high value being outbid by a bidder with a low value and this is inefficient.

The key problem with the first-price auction is that bidders have an incentive to bid less than their value to minimize their payment, but this creates a tricky trade-off as lower bids also mean an increased chance of losing altogether.  With this observation we turn to the second-price auction, which clearly removes this trade-off entirely.  On the other hand it seems crazy on its face:  if bidders don’t have to put their money where their mouths are, won’t they now want to go in the other direction and raise their bids above their values?

We prove that it is a dominant strategy to bid your value in a second-price auction and that the auction is therefore an efficient mechanism in this setting.
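For readers who want the argument compressed, here is roughly how the dominance proof goes.  The notation is mine; the slides may present it differently.

```latex
% Second-price auction: bidder i has value v_i, bids b_i, and if she wins she
% pays the highest competing bid p = \max_{j \neq i} b_j.
\[
u_i(b_i) \;=\;
\begin{cases}
v_i - p & \text{if } b_i > p,\\[2pt]
0 & \text{if } b_i < p.
\end{cases}
\]
% Bidding b_i = v_i wins exactly when v_i > p, i.e. exactly when winning is
% profitable. Raising the bid above v_i only adds wins with v_i - p < 0;
% lowering it below v_i only forfeits wins with v_i - p > 0. So bidding one's
% value is weakly dominant, the highest-value bidder wins, and the outcome is
% efficient.
```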

Next we explore some of the limitations of this result.  We look at externalities:  it matters not just whether I get the good, but also who else gets it in the event that I don’t.  We see that a second-price auction is not efficient anymore.  And we look at a setting with common values:  information about the object’s value is dispersed among the bidders.

For the common-value setting I do a classroom experiment where I auction an unknown amount of cash.  The amount up for sale is equal to the average of the numbers on 10 cards that I have handed out to 10 volunteers.  Each volunteer sees only his own card and then bids.  If the experiment works (it doesn’t always work) then we should see the winner’s curse in action:  the winner will typically be the person holding the highest number, and bidding something close to that number will lose money as the average is certainly lower.
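Here is a rough simulation of the classroom setup, with a deliberately naive bidding rule (bid 90 percent of your own card) standing in for how students might behave.  The rule and the numbers are assumptions for illustration only, not a description of actual bids.

```python
import random

# Rough simulation of the classroom common-value auction: the prize is the
# average of 10 cards and each bidder sees only her own card. The bidding rule
# (bid 90% of your own card) is an assumption, not a claim about how students
# actually behave.

def run_auction(rng, n_bidders=10, card_max=100.0, shade=0.9):
    cards = [rng.uniform(0, card_max) for _ in range(n_bidders)]
    prize = sum(cards) / len(cards)        # the true common value
    bids = [shade * c for c in cards]      # naive, mildly shaded bids
    winner = max(range(n_bidders), key=lambda i: bids[i])
    return prize - bids[winner]            # the winner's profit

profits = [run_auction(random.Random(seed)) for seed in range(1000)]
print("average winner's profit:", round(sum(profits) / len(profits), 2))
print("share of auctions where the winner loses money:",
      sum(p < 0 for p in profits) / len(profits))
```

Under that rule the winner is almost always the holder of the highest card and loses money on average, which is the winner’s curse the experiment is designed to produce.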

Here are the slides.

(I got the idea for the winner’s curse experiment from Ben Polak, who auctions a jar of coins in his game theory class at Yale.  Here is a video. Here is the full set of Ben Polak’s game theory lectures on video.  They are really outstanding.  Northwestern should have a program like this.  All universities should.)

Wine and movies have a lot in common.  They are both worldwide markets for highly differentiated products with critics who are visible and economically important.  But while there are as many film critics as there are films and opinions about films, there are just a handful of highly influential wine critics:  Robert Parker’s Wine Advocate, The Wine Spectator, and a few others.  This is somewhat counterintuitive because there are many, many more wines than films.  Here are a few thoughts.

  1. People know their taste in movies better than they know their taste in wine.  This makes it easier to find idiosyncratic movie critics that have similar tastes.  Similar critics face an entry barrier in the wine world.
  2. All wines taste the same and the role of a critic is just to tell you which wines you are supposed to like and which wines you can brag about drinking.  This creates a natural oligopoly among the wine critics who the market coordinates on.
  3. Wines are given as gifts and movies are not. This means that wine critics are rewarded for reflecting general rather than specialized tastes.
  4. A very small fraction of wines are good and wine criticism just means tasting thousands of wines until you find the good ones.  This creates increasing returns to scale in wine criticism, another source of natural monopoly power.
  5. The movie business is less competitive so a blockbuster film earns more rents and as a result there is more rent seeking, especially in marketing.  Thus the emergence of David Manning.  There is no analogous force behind “The feel good wine of the year!”
  6. Wine critics provide a service for wine-makers, film critics are serving film-goers.  What makes a good wine critic is the ability to articulate what wine buyers will buy.  Whoever is best at this will dominate.

Cynics believe some version of 6 and 2 (Parkerization).  I don’t understand why 5 wouldn’t be the same for wine and film; maybe this is just a matter of time.  4 may be true in the mid-range but whether this matters depends on whether you think wine critics are really influential there or rather at the high end where there are relatively few consistent performers.  I lean toward 1, Gary Vaynerchuk notwithstanding, which is a less cynical version of 6.

One of the highly touted features of the iPhone is the abundance of applications available for near-instantaneous download and seamless installation.  In traditional Apple fashion, in order to keep full control of the software environment and maintain this seamless experience, Apple exercises strict control over which apps are made available through the app store.  Short of jailbreaking your phone, there is no other way to install third-party software.

The process by which apps are submitted and reviewed strikes many as highly inefficient.  (It also strikes many as anti-competitive, but that is not the subject of this post.  There are legitimate economic arguments supporting Apple’s control of the platform and for my purposes here I will take those as given, although for now I remain agnostic on the question.) Developers sink significant investment producing launch-ready versions of their software and only then learn definitively whether the app can be sold.  There is no recourse if the submission is denied.

(Just recently, we witnessed an extreme example of the kind of deadweight loss that can result.  A fully licensed, full-featured Commodore 64 operating-system emulator, a year in the making, was just rejected from the app store.)

Unfortunately, this is an inevitable inefficiency due to the ubiquitous problem of incomplete contracting.  In a first-best world, Apple would publicize an all-encompassing set of rules outlining exactly what software would be accepted and what would be rejected.  In this imaginary world of complete contracts, any developer would know in advance whether his software would be accepted and no effort would be wasted.

In reality it is impossible to conceive of all of the possibilities, let alone describe them in a contract.  Therefore, in this second-best world, at best Apple can publish a broad set of guidelines and then decide on a case-by-case basis when the final product is submitted.  This introduces inefficiencies at two levels.  First, the direct effect is that developers face uncertainty whether their software will pass muster and this is a disincentive to undertake the costly investment at the beginning.

But the more subtle inefficiency arises due to the incentive for gamesmanship that the imperfect contract creates.  First, Apple’s incentive in constructing guidelines ex ante is to err on the side of appearing more permissive than they intend to be.  Apple knows even less than the developers what the final product will look like and Apple values the option to bend the (unwritten) rules a bit when a good product materializes.  While it is true that Apple’s desire to keep a reputation for transparent guidelines mitigates this problem to some extent, the fact remains that Apple does not internalize all the costs of software development.

Second, because Apple cannot predict what software will appear it cannot make binding commitments to reject software that is good but slightly erodes their standards.  This gives developers an incentive to engage in a form of brinkmanship:  sink the cost to create a product highly valued by end users but which is questionable from Apple’s perspective.  By submitting this software the developer puts Apple in the difficult position of publicly rejecting software that end users want, and the fear of bad publicity may lead Apple to accept software that they would have liked to commit in advance to reject.

The iPhone app store is only a year old and many observers think of it as a short-run system that is quickly becoming overwhelmed by the surprising explosion of iPhone software.  When the app store is reinvented, it will be interesting to see how they approach this unique two-sided incentive problem.

Update: Mark Thoma further develops here.  He didn’t ask in advance for permission to do that, but if he did I would have given encouraging signals but then rejected it ex post.

That’s the title of a new essay by the omnipresent David Levine.  An excerpt:

The key difference between psychologists and economists is that psychologists are interested in individual behavior while economists are interested in explaining the results of groups of people interacting. Psychologists also are focused on human dysfunction – much of the goal of psychology (the bulk of psychologists are in clinical practices) is to help people become more functional. In fact, most people are quite functional most of the time. Hence the focus of economists on people who are “rational.” Certain kinds of events – panics, for example – that are of interest to economist no doubt will benefit from understanding human dysfunctionality. But the balancing of portfolios by mutual fund managers, for example, is not such an obvious candidate. Indeed one of the themes of this essay is that in the experimental lab the simplest model of human behavior – selfish rationality with imperfect learning – does an outstanding job of explaining the bulk of behavior.

Jonah Lehrer suggests leveraging “mental accounting” to create a free lunch by imposing a tax on homeowners to pay for energy retro-fitting that they won’t notice because of its small size relative to the price of the house:

But I can already hear the naysayers: Won’t homeowners object? Won’t that just add thousands of dollars to the cost of buying a home? Enter mental accounting, an irrational bias that can be tweaked to produce positive outcomes. Because a home is already such a gigantic purchase, and because the home buying process is already so saturated in peculiar fees (inspection charges, loan points, escrow fees, mortgage broker expenses, etc.) I’d argue that consumers will be much less sensitive to the cost of a home renovation. They’ll barely even notice the $5000 “energy efficiency charge” when it appears on their massive bill from the real estate agent. (Besides, they’ll get a big chunk of the money back as a tax credit.) In other words, they’ll act just like me the last time I stayed at a fancy hotel, when I ordered the internet and ate the peanuts.

A Slate article reports that in surveys the proportion of people who say they voted for Obama over McCain does not match the results of the election.   Of course this panders to my inner economist.  I’m interested in how much of the difference can be attributed to outright lying versus self-deception.  An outright liar knows he is lying while credible self-deception involves some chance you actually believe the story you tell yourself.

It would be cool to have an experiment that distinguished between the two.  Maybe it’s already out there?

Now we have set the stage.  We are considering social choice problems with transferable utility.  We want to achieve Pareto efficient outcomes, which in this context is equivalent to utilitarianism.

Now we face the next problem.  How do we know what the efficient policy is?  It of course depends on the preferences of individuals, and any institution must implicitly provide a medium through which preferences are communicated and mediated.  In this lecture I introduce this idea in the context of a simple example.

Two roommates are considering purchasing an espresso machine.  The machine costs $50.  Each has a maximum willingness to pay, but each knows only his own willingness to pay and not the other’s.  It is efficient to buy the machine if and only if the sum exceeds $50.  They have to decide two things:  whether or not to buy the machine and how to share the cost.  I ask the class what they would do in this situation.

A natural proposal is to share the cost equally.  I show that this is inefficient because it may be that one roommate has a high willingness to pay, say $40, and the other has a low willingness to pay, say $20.  The sum exceeds $50, but at an equal split of $25 each the low-value roommate will reject.  This leads to a discussion of how to improve the mechanism.  Students propose clever mechanisms and we work out how each of them can be manipulated, and we discover the conflict between efficiency and incentive-compatibility.  There is scope for some very engaging class discussions here that create a good mindset for the coming more careful treatment.
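A tiny numerical version of the example, using the $40/$20 values from the text.  The “proportional split” rule below is my own stand-in for the kinds of mechanisms students propose, not the one the course ends up with.

```python
# The roommate example with the numbers from the text: values $40 and $20,
# machine costs $50. The proportional-split rule is an illustrative stand-in,
# not the mechanism proposed in class.

COST = 50

def equal_split_buys(values):
    # Buy only if every roommate is willing to cover an equal share.
    share = COST / len(values)
    return all(v >= share for v in values)

def proportional_split(reports):
    # Buy iff reported values cover the cost; split the cost in proportion to reports.
    buy = sum(reports) >= COST
    shares = [COST * r / sum(reports) if buy else 0.0 for r in reports]
    return buy, shares

values = [40, 20]
print("efficient to buy:", sum(values) >= COST)      # True: 40 + 20 > 50
print("equal split buys:", equal_split_buys(values)) # False: 20 < 25

# Under the proportional rule roommate 1 gains by shading her report:
print("truthful reports [40, 20]:", proportional_split([40, 20]))
print("shaded reports   [31, 20]:", proportional_split([31, 20]))
```

Shading the report from $40 to $31 still gets the machine bought but lowers roommate 1’s share of the cost, which is the kind of manipulability that puts efficiency and incentive-compatibility in tension.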

At this stage I tell the students that these mechanisms create something like a game played by the roommates and if we are going to get a good handle on how institutions perform we need to start by developing a theory of how people play games like this.  So we will take a quick detour into game theory.

For most of this class, very little game theory is necessary.  So I begin by giving the basic notation and defining dominated and dominant strategies.  I introduce all of these concepts through a hilarious video:  The Golden Balls Split or Steal Game (which I have blogged about here before).  I play the beginning of the video to set up the situation, then pause it and show how the game described in the video can be formally captured in our notation.  Next I play the middle of the video where the two players engage in “pre-play communication.”  I pause the video and have a discussion about what the players should do and whether they think that communication should matter.  I poll the class on what they would do and what they predict the two players will do.  Then I show them the dominant strategies.

Finally I play the conclusion of the video.  It’s a pretty fun moment.   I conclude with a game to play in class.  This year I had just started using Twitter and I came up with a fun game to play on Twitter.  I blogged about this game previously.

(By the way this game is extremely interesting theoretically.  I am pretty confident that this game would always succeed in implementing the desired outcome: getting the target number of players to sign up, but it is not easy to analyze because of the continuous time nature.  The basic logic is this:  if you think that the target will not be met, then you should sign up immediately.  But then the target will be met.)

Here are the lecture slides.

Apparently the price you are quoted when you search for fares on Spain’s high-speed railway depends on whether you search in English or Spanish:

When I searched the site earlier that day from my office, I searched in Spanish. A one-way ticket from Barcelona to Madrid could be had for around 44 euros on a “tarifa Web,” their Internet special fare with 30 day advance purchase.

When I was at home, ready to finalize my purchase, I opted to search with the site language set to English. The price was nearly 110 euros.

The economic logic is standard:  language is a way to segment the market and this segmentation is profitable if the two markets have a large difference in price-sensitivity.  Presumably if you are searching in English then you are a tourist and you have fewer alternative modes of transportation.  This makes you less price-sensitive.

I thank the well-travelled and multi-lingual Mallesh Pai for the pointer.

This is not one of those arrangements where donors can sponsor a needy child or a sorghum farmer in the developing world. The person asking for help is a 21-year-old neurobiology major at Harvard, and she is requesting a loan from Harvard alumni.

The service, Unithrive, resembles micro-lending in a number of ways (except perhaps the sticker.)

Unithrive, which made its debut last month, matches alumni lenders and cash-strapped students, who post photographs and biographical information and can request up to $2,000. The loans are interest-free and payable within five years of graduation.

See the article in the New York Times.

We used to be in denial that there were any bubbles; now everything is a bubble.   This article in the Chronicle of Higher Education sounds the alarm on higher education (tassel twirl:  lone gunman).

Is it possible that higher education might be the next bubble to burst? Some early warnings suggest that it could be.

With tuitions, fees, and room and board at dozens of colleges now reaching $50,000 a year, the ability to sustain private higher education for all but the very well-heeled is questionable. According to the National Center for Public Policy and Higher Education, over the past 25 years, average college tuition and fees have risen by 440 percent — more than four times the rate of inflation and almost twice the rate of medical care. Patrick M. Callan, the center’s president, has warned that low-income students will find college unaffordable.

Meanwhile, the middle class, which has paid for higher education in the past mainly by taking out loans, may now be precluded from doing so as the private student-loan market has all but dried up.

The analogy to the housing bubble is certainly tempting.  Pell grants and Stafford loans are to colleges what Fannie and Freddie are to housing.  It is undeniable that easy access to credit fueled rises in tuition.  It is not a stretch to think of these loan programs as essentially subsidies to universities, since universities raise tuition a dollar for every dollar of loans that are essentially forgiven.

But the analogy doesn’t go any farther than that.  There is no speculation fueling demand for higher education.  There is a permanent and measurable difference in earnings for college graduates.  There will continue to be a robust market for credit to students because, to borrow a phrase, consumption wants to be smoothed.  And unlike subsidized loans for housing, there is a real externality that justifies continued federal presence in the student loan market.

After the showstopper that is Arrow’s Theorem, we could just throw in the towel.  The motivation for studying social welfare functions was to find a coherent standard by which to judge institutions and to propose policies.  Now we see that there is no coherent standard.  Well, students, you are not getting away so easily; after all, this is only the second week of the course.  We will accept that we must violate one of the axioms.  Which one do we choose?

A lot of normative economic theory is implicitly built upon one of two welfare criteria, either Pareto efficiency or utilitarianism.  While it is standard to formally define Pareto efficiency in an undergraduate micro class, utilitarianism is often invoked without explicit mention.  For example, we are implicitly using some form of utilitarianism when we talk about consumer and producer surplus.  And to argue that a monopoly is inefficient in a partial equilibrium framework is a utilitarian judgment (absent compensating transfers.)

So I make it explicit.  And I take the time to formally define utilitarianism, explain where it applies and what justifies it and I point out its limitations.  In terms of Arrow’s theorem I tell the students that we are dropping the axiom of universal domain (UD.)  That is, we are not requiring our social welfare function to apply in all situations, only in those situations in which there is a valid measure of welfare that can be transferred and/or compared inter-personally.  In this class, that measure of welfare is willingness to pay, and it applies when there are monetary transfers available and all agents value money in equal terms, i.e. quasi-linear utility.

These lectures contain one important formal result.  In the quasi-linear world with monetary transfers utilitarianism coincides with Pareto efficiency.  So these two common welfare standards are the same.  (Any utilitarian improvement can be made into a Pareto improvement with judiciously chosen transfers and any Pareto improvement is a utilitarian improvement.)
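For the record, here is a sketch of the argument in one direction (the notation is mine; the notes do this more carefully).

```latex
% Sketch: with quasi-linear utility u_i = v_i(x) + t_i and budget balance
% \sum_i t_i = 0, utilitarian and Pareto improvements coincide.
Suppose $x'$ is a utilitarian improvement over $x$, with gain
\[
G \;=\; \sum_j v_j(x') - \sum_j v_j(x) \;>\; 0 .
\]
Define transfers $t_i = v_i(x) - v_i(x') + G/n$.  They sum to zero, and each
agent's utility change is
\[
\bigl[v_i(x') + t_i\bigr] - v_i(x) \;=\; G/n \;>\; 0 ,
\]
so $(x', t)$ is a Pareto improvement.  Conversely, summing the individual gains
of any Pareto improvement (the transfers cancel) shows that it is also a
utilitarian improvement.
```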

Here are the notes.

Via MR, some thoughts on carbon taxes:

However, this does not necessarily mean that revenue-neutral CO2 taxes, or auctioned allowance systems, produce a “double dividend” by reducing the costs of the broader tax system in addition to slowing climate change. There is a counteracting, “tax-interaction” effect (e.g., Goulder 1995). Specifically, the (policy-induced) increase in energy prices drives up the general price level, which reduces real factor returns, and thereby (slightly) reduces factor supply and efficiency.

Indeed, a triple dividend.  The reason, say, labor supply will fall is that the marginal labor was being sold to buy the marginal output that we have decided should not be produced because of the externality.  So this was part of the plan.

Greg Mankiw thinks B-School economists are “practical” and “empirical” while Econ Dept economists are free to be abstract and theoretical.

I don’t think the research done by economists differs across these two types of schools, but it is true that the teaching is different.  The MEDS Dept at Kellogg where I work is somewhat different from other business schools as it has always been very theory focused.  The Econ group at Stanford GSB is similar.  Some of the best work in game theory, contract theory and decision theory came out of these departments.

It is the case, as Mankiw says, that teaching has to be practical and useful in a B school.  Whether that drives research or not depends on the philosophy of the school.  I have never felt any pressure for my research to be practical.

Mankiw writes his post to answer David Brooks’s query about why B school economists are giving him better answers about the current state of the economy.  As finance is a B school specialty, it is very natural that B school profs may know more about what a CDS is without having to look it up on Wikipedia!  But again, finance economists are not more “practical” or “empirical” than econ dept economists.  I bet Doug Diamond and Milt Harris at Chicago GSB have really perceptive things to say about the financial crisis, as does Oliver Hart at Harvard Econ.  They will use simple, clear models (hopefully!) to explain their ideas about how to fix incentives in the financial sector.  And then maybe someone will give a little theory as much of a chance as data analysis!

A post at Freakonomics (and accompanying article at Slate) advocates protection against price depreciation as a way to prop up housing prices:

Sellers could commit to reimbursing their buyers for any fall in the average value of homes in their area in the year following a sale. Such price protection would give buyers confidence that they won’t regret their purchases even if the market does fall further and cheaper houses come on offer — confidence that they need in order to buy now. And if buyers gain confidence, prices won’t fall, so sellers won’t have to pay. … And it’s natural for sellers to provide the insurance that price protection involves. If they can’t sell their houses, they’re going to end up bearing the house price risk anyway.

Here are some other things sellers could do to keep prices from falling:

  1. Commit to compensate buyers for future appreciation on the home they move out of
  2. Throw in tuition for the neighborhood private schools
  3. Remodel the kitchen

I teach undergraduate intermediate microeconomics, a 10-week course that is the second in a two-part sequence at Northwestern University.  I have developed a unique approach to intermediate micro based originally on a course designed by my former colleague Kim-Sau Chung.  The goal is to study the main themes of microeconomics from an institution-free and, in particular, market-free approach.  To illustrate what I mean, when I cover public goods, I do not start by showing the inefficiency of market-provided public goods.  Instead I ask what are the possibilities and limitations of any institution for providing public goods.  By doing this I illustrate the basic difficulty without confounding it with the additional problems that come from market provision.  I do similar things with externalities, informational asymmetries, and monopoly.

All of this is done using the tools of dominant-strategy mechanism design.  This enables me to talk about basic economic problems in their purest form.  Once we see the problems posed by the environments mentioned above, we investigate efficiency  in the problem of allocating private goods with no externalities.  A cornerstone of the course is a dominant-strategy version of the Myerson-Satterthwaite theorem which shows the basic friction that any institution must overcome.  We then investigate mechanisms for efficient allocation in large economies and we see that the institutions that achieve this begin to resemble markets.

Only at this stage do markets become the primary lens through which to study microeconomics.  We look at a simple model of competition among profit-maximizing auctioneers and a sketch of convergence to competitive equilibrium.  Then we finish with a brief look at general equilibrium in pure exchange economies and the welfare theorems.

There is a minimal amount of game theory, mostly just developing the tools necessary to use mechanism design in dominant strategies, but also a side trip into Nash equilibrium and mixed strategies.

In the coming weeks I will be posting here my lecture notes with a brief introduction to the themes of each.  I am distributing these notes under the Creative Commons attribution, non-commercial, share-alike license.  Briefly, you are free to use these for any non-commercial purpose but you must give credit where credit is due.  And you are free to make any changes you wish, but you must make available your modifications under the same license.

Today I am posting my notes for the first week, on welfare economics.

I begin with welfare economics because I think it is important to address at the very beginning what standard we should be using to evaluate economic institutions.  And students learn a lot from just being confronted with the formal question of what is a sensible welfare standard.  Naturally these lectures build to Arrow’s theorem, first discussing the axioms and motivating them and then stating the impossibility result.  In previous years I would present a proof of Arrow’s theorem but recently I have stopped doing that because it is time consuming and bogs the course down at an early stage.  This is one of the casualties of the quarter system.

James Joyce’s Ulysses? The Great Gatsby?  Something challenging by Thomas Pynchon? Something whimsical by P.G. Wodehouse?

No, the smart vote goes to Isaac Asimov’s Foundation Trilogy.

The latest fan to come out in public is Hal Varian.  In a Wired interview, he says:

“In Isaac Asimov’s first Foundation Trilogy, there was a character who basically constructed mathematical models of society, and I thought this was a really exciting idea. When I went to college, I looked around for that subject. It turned out to be economics.”

The first time I saw a reference to the books was in an interview with Roger Myerson in 2002.  And he repeated the fact that he was influenced by Foundation in an answer to a  question after he got the Nobel Prize in 2007.  And finally, Paul Krugman also credits the books with inspiring him to become an economist.   A distinguished trio of endorsements!

Asimov’s stories revolve around the plan of Hari Seldon, a “psychohistorian,” to influence the political and economic course of the universe.   Seldon uses mathematical methodology to predict the end of the current Empire.  He sets up two “Foundations,” or colonies of knowledge, to reduce the length of the dark age that will follow the end of empire.  The first Foundation is defeated by a weird mutant called the Mule.  But the Mule fails to locate and kill the Second Foundation. So, Seldon manages to preserve human knowledge and perhaps even predicted the Mule using psychohistory.  Seldon also has a keen sense of portfolio diversification – two Foundations rather than one – and also the law of large numbers – psychohistory is good at predicting events involving a large number of agents but not at forecasting individual actions.

As the above discussion reveals, I did take a stab at reading these books after I saw the Myerson interview (though I admit I used Wikipedia liberally to jog my memory for this post!).  And you can also see how Myerson’s “mechanism design” theory might have come out of reading Asimov.  I enjoyed reading the first book in the trilogy and it’s clear how it might excite a teenage boy with an aptitude for maths.  The next two books are much worse.  I struggled through them just to find out how it all ended.  Perhaps I read them when I was too old to appreciate them.

The Lord of the Rings is probably wooden to someone who reads it in their forties.  It still sparkles for me.

Greg Mankiw is trying to make a reductio ad absurdum critique of the objective of income redistribution.  He has written a paper with Matthew Weinzierl which shows that optimal taxation will typically involve taxing all kinds of characteristics that seem patently unfair and unacceptable.  He concludes from this that it is the goal of income redistribution that entails these absurdities.

But there is a prominent guy who lives at a nice home at 1600 Pennsylvania Avenue who wants to “spread the wealth around.” The moral and political philosophy used to justify such income redistribution is most often a form of Utilitarianism. For example, the work on optimal tax theory by Emmanuel Saez, the most recent winner of the John Bates Clark award, is essentially Utilitarian in its approach.

The point of our paper is this: If you are going to take that philosophy seriously, you have to take all of the implications seriously. And one of those implications is the optimality of taxing height and other exogenous personal characteristics correlated with income-producing abilities.

This argument fails because the objectionable policies implied by optimal taxation in his model have nothing to do with income redistribution or utilitarianism.  Indeed they would be optimal under the weaker and unassailable welfare standard of Pareto efficiency which I would assume Mankiw embraces.

Let me summarize.  Optimal taxation involves minimizing the distortionary effect on output from raising some required level of revenue.  It does not matter what that revenue is being used for.  It could be for redistribution but it could also be for producing public goods that will benefit everyone.  Whatever revenue is required, the optimal taxation policy generates this revenue with minimal cost in terms of reduced incentives for private production.    Taxing exogenous and observable characteristics that are correlated with productivity is a way of generating revenue without distorting incentives.

If we tax income (a direct measure of productivity) you can lower your taxes by earning less, that is a distortion.  If we tax your height (known to be correlated with productivity), you cannot avoid these taxes by making yourself shorter.
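A toy calculation makes the contrast concrete.  The quasi-linear effort model and all of the numbers below are mine, chosen only to show the distortion; they are not from the Mankiw-Weinzierl paper.

```python
# Toy version of the distortion argument. A worker with productivity theta
# chooses effort l to maximize after-tax income minus effort cost l**2 / 2.
# All functional forms and numbers are invented for illustration.

def income_tax_outcome(theta, tau):
    # Keeping (1 - tau) per unit of output, the worker chooses l = (1 - tau) * theta.
    l = (1 - tau) * theta
    output = theta * l
    return output, tau * output            # (output, revenue)

def lump_sum_outcome(theta, T):
    # A tax that does not depend on effort (e.g. on height) leaves l = theta unchanged.
    l = theta
    return theta * l, T                    # (output, revenue)

theta = 2.0
out_inc, rev = income_tax_outcome(theta, tau=0.3)
out_lump, _ = lump_sum_outcome(theta, T=rev)   # raise the same revenue lump-sum
print(f"income tax : output = {out_inc:.2f}, revenue = {rev:.2f}")
print(f"lump-sum   : output = {out_lump:.2f}, revenue = {rev:.2f}")
```

Both taxes raise the same revenue, but the income tax does it by shrinking output while the lump-sum tax leaves effort untouched.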

So the implication that Mankiw wants us to be uncomfortable with is an implication of the way optimal tax theorists conceive the problem of revenue generation and the implication would be present regardless of how we imagine that tax revenue being spent. It has nothing to do with redistribution and we can feel uncomfortable with height taxation without that making us think twice about our desire to redistribute wealth.

Via kottke.org, here is the first installment of an Errol Morris essay on Han van Meegeren, the Dutch artist who duped the art world into thinking that his paintings were the work of Vermeer.  Morris concludes with the following

To be sure, the Van Meegeren story raises many, many questions. Among them: what makes a work of art great? Is it the signature of (or attribution to) an acknowledged master? Is it just a name? Or is it a name implying a provenance? With a photograph we may be interested in the photographer but also in what the photograph is of. With a painting this is often turned around, we may be interested in what the painting is of, but we are primarily interested in the question: who made it? Who held a brush to canvas and painted it? Whether it is the work of an acclaimed master like Vermeer or a duplicitous forger like Van Meegeren — we want to know more.

The economics version of this question is why the price of a painting would fall just because it has been discovered to be a forgery by technical means and not because the painting was considered of lesser quality.  And a corollary question:  if you own a painting which is thought by all to be a genuine Vermeer, why would you or anyone else invest to find out whether it is a forgery?  There is probably a good answer to this that doesn’t require resorting to the assumption that buyers value the name more than the painting.

The value of a painting is the flow value of having it hang on your wall plus the eventual resale value.  For the truly immortal works of art the flow value is negligible relative to the resale value.  The resale value is linked to the flow value of the person to whom it will be sold, the person she will sell it to, etc.  Ultimately this means that the price is determined by the sequence of people who have the greatest appreciation for art, since they will be willing to pay the most for the flow value.  The existence of just one person in that sequence who is sensitive enough to distinguish a true Vermeer from a Van Meegeren implies a large difference in the prices, even if that person is not alive today and will not be for many generations.
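Written out, the argument is just a resale recursion; the notation is mine.

```latex
% A minimal way to write the argument: owner t enjoys flow value f_t and
% resells at price P_{t+1}, discounting at delta.
\[
P_t \;=\; f_t + \delta P_{t+1}
\qquad\Longrightarrow\qquad
P_0 \;=\; \sum_{t \ge 0} \delta^t f_t .
\]
% If some future owner t* can tell a genuine Vermeer from a Van Meegeren, her
% flow value differs across the two paintings, and the whole discounted gap
% \delta^{t^*}\bigl(f_{t^*}^{\text{Vermeer}} - f_{t^*}^{\text{forgery}}\bigr)
% shows up in the price today, even if nobody alive now can tell the difference.
```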

The French Open began on Sunday and if you are an avid fan like me the first thing you noticed is that the Tennis Channel has taken a deeper cut of the exclusive cable television broadcast in the United States.  I don’t subscribe to the Tennis Channel and until this year it has been only a slight nuisance, taking a few hours here and there and the doubles finals.  But as I look over the TV schedule for the next two weeks I see signs of a sea change.

First of all, only the TC had the French Open on Memorial Day, yesterday.  This I think was true last year as well, but this year all of the early-session live coverage for the entire tournament is exclusive to TC.  ESPN2 takes over for the afternoon session and will broadcast early-session games on tape.

This got me thinking about the economics of broadcasting rights.  I poked around and discovered that in fact the TC owns all US cable broadcasting rights for the French Open for many years to come.  ESPN2 is subleasing those rights from TC for the segments it is airing.  So that is interesting.  Why is TC outbidding ESPN2 for the rights and then selling most of them back?

Two forces are at work here.  First, ESPN2 as a general sports broadcaster has more valuable alternative uses for the air time and so their opportunity cost of airing the French Open is higher.  But of course the other side is that ESPN2 can generate a larger audience just from spillovers and self-advertising than TC so their value for rights to the French Open is higher. One of these effects outweighs the other and so on net the French Open is more valuable to one of these two networks.  Naively we should think that whoever that is would outbid the other and air the tournament.  So what explains this hybrid arrangement?

My answer is that there is uncertainty about the TC’s ability to generate enough audience for a grand slam to make it more valuable for TC than for ESPN.  In the face of this, TC wants a deal which allows it to experiment on a small scale and find out what it can do but also leaves it the option of selling back the rights if the news is bad.  TC can manufacture such a deal by buying the exclusive rights.  ESPN2 knows its net value for the French Open and will bid that value for the original rights.  And if it loses the bidding it will always be willing to buy those rights at the same price on the secondary market from TC.  TC will outbid ESPN2 because the value of the option is at least the resale price and in fact strictly higher if there is a chance that the news is good.
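A back-of-the-envelope version of the option argument, with invented numbers (I obviously have no idea what the actual rights fees or probabilities are):

```python
# Illustrative numbers only: a sketch of the option-value argument above.

espn2_value   = 100.0   # ESPN2's known net value of airing the French Open
tc_value_good = 140.0   # TC's value if it learns it can draw a big enough audience
tc_value_bad  = 60.0    # TC's value if it cannot
p_good        = 0.4     # assumed probability of good news

# TC's expected value without the resale option:
tc_standalone = p_good * tc_value_good + (1 - p_good) * tc_value_bad

# With the option, TC keeps the rights only when that beats reselling to ESPN2:
tc_with_option = (p_good * max(tc_value_good, espn2_value)
                  + (1 - p_good) * max(tc_value_bad, espn2_value))

print("ESPN2 bids up to:           ", espn2_value)     # 100
print("TC stand-alone value:       ", tc_standalone)   # 92
print("TC value with resale option:", tc_with_option)  # 116
```

Even though TC’s expected stand-alone value (92 here) is below ESPN2’s 100, the resale option pushes TC’s willingness to pay to 116, so TC wins the original auction and then subleases when the news warrants it.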

So, the fact that TC has steadily reduced the amount of time it is selling back to ESPN2 suggests that so far the news is looking good and there is a good chance that soon the TC will be the exclusive cable broadcaster for the French Open and maybe even other grand slams.

Bad news for me because in my area the TC is not broadcast in HD and so it is simply not worth the extra cost to subscribe.  While we are on the subject, here is my French Open outlook:

  1. Federer beat Nadal convincingly in Madrid last week.  I expect them in the final and this could bode well for Federer.
  2. If there is anybody who will spoil that outcome it will be Verdasco who I believe is in Nadal’s half of the draw.  The best match of the tournament will be Nadal-Verdasco if they meet.
  3. The Frenchmen are all fun but they don’t seem to have the staying power.  Andy Murray lost a lot psychologically when he was crowing going into this year’s Australian and lost early.
  4. I always root for Tipsarevich.  And against Roddick.
  5. All of the excitement on the women’s side from the past few years seems to have completely disappeared with the retirement of Henin, the injury to Sharapova and the meltdown of the Serbs.  I expect a Williams-Williams yawner.

Google determines quality scores by calculating multiple factors, including the relevance of the ad to the specific keyword or keywords, the quality of the landing page the ad is linked to, and, above all, the percentage of times users actually click on a given ad when it appears on a results page. (Other factors, Google won’t even discuss.) There’s also a penalty invoked when the ad quality is too low—in such cases, the company slaps a minimum bid on the advertiser. Google explains that this practice—reviled by many companies affected by it—protects users from being exposed to irrelevant or annoying ads that would sour people on sponsored links in general. Several lawsuits have been filed by would-be advertisers who claim that they are victims of an arbitrary process by a quasi monopoly.

What is the distortion?  One example would be an advertiser who is targeting a very select segment of the market and expects few to click through but expects a lot of money from those that do.  This advertiser is willing to pay a lot but may be excluded on quality score.  So one view is that Google is transferring value from high-paying niche consumers to the rest of the market.

However, for every set of keywords there is another market.  Google would optimally lower the quality penalty on searches using keywords that reveal that the searcher is really looking for a niche product. With this view the quality score is a mechanism for preventing unraveling of an efficient market segmentation.

The article is in Wired and it looks at Hal Varian, chief economist at Google.

I’m sipping my morning coffee and glancing at the Sunday New York Times when my 8-year old son asks “What is stillborn?”.  I choke on my coffee a bit and open my eyes wider to see a photo of several women in Tanzania burying a tiny stillborn baby on the front page of the paper. After a quick answer that doesn’t invite much discussion I flip the paper over so that the distressing photo is no longer in view.  Only to see another photo — this one of a college student feeding a lamb with a bottle in upstate New York  — that makes the first photo even more depressing.  Apparently in the U.S. we have a surfeit of college educated young people to care for newborn lambs, while in Tanzania (and in distressingly many other places throughout the developing world) mothers and their babies die because there are too few people with the requisite skills to care for them.

The credit card companies are claiming they will have to charge annual fees and cut reward programs for customers who always pay on time because they are being forced to stop ripping off confused customers who incur late fees and sudden doublings of their interest rates.  Ed Yingling, president of the American Bankers Association, warns:

“It will be a different business,” said Edward L. Yingling, the chief executive of the American Bankers Association, which has been lobbying Congress for more lenient legislation on behalf of the nation’s biggest banks. “Those that manage their credit well will in some degree subsidize those that have credit problems.”

The idea seems to be that since the price is being cut for the people with credit problems, it will have to be increased for those with good credit.

I claim this does not make any sense and is not going to happen.  There are two reasons for this.

To understand the first reason, we must consider why credit card companies charge different prices to different consumers in the first place.  This is a form of price discrimination.  To people with lots of outside options, you have to give a good deal – this is the rationale for reward programs for good risks.  For people with few options, you can afford to raise the price – this is the rationale for high interest rates for the high risk consumers.  To implement price discrimination you have to be able to identify people in the two groups.  The credit card companies have access to both internal and shared data to do this.  You make profits in both markets, with higher profits presumably in the high risk market if you manage the risk correctly.  If you cannot price-discriminate because you do not have the information or are not allowed to do so by law, you set a uniform price, hiking up the price for the low risk and lowering it for the high risk.  This is the idea Yingling is suggesting.  For a monopolist, uniform pricing makes less profit than price discrimination as it is less targeted to consumers’ willingness to pay.
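Here is a two-segment toy calculation of that last claim.  The linear demand curves and the numbers are invented and have nothing to do with actual credit card data; they just illustrate why a discriminating monopolist never gains by switching to a uniform price.

```python
# Toy two-segment example: a monopolist earns less under a single uniform price
# than when it can price-discriminate. Demands and numbers are made up.

def profit(price, demand):
    return price * demand(price)

# Linear demands for the two segments (low-risk vs high-risk consumers).
low_risk  = lambda p: max(0.0, 100 - 2 * p)   # more price-sensitive
high_risk = lambda p: max(0.0, 60 - 0.5 * p)  # fewer options, less sensitive

prices = [p / 10 for p in range(0, 1401)]     # grid of candidate prices

# Separate prices for each segment.
best_low  = max(prices, key=lambda p: profit(p, low_risk))
best_high = max(prices, key=lambda p: profit(p, high_risk))
discrim_profit = profit(best_low, low_risk) + profit(best_high, high_risk)

# One uniform price for both segments.
best_uniform = max(prices, key=lambda p: profit(p, low_risk) + profit(p, high_risk))
uniform_profit = profit(best_uniform, low_risk) + profit(best_uniform, high_risk)

print(f"discrimination: p_low = {best_low}, p_high = {best_high}, profit = {discrim_profit:.0f}")
print(f"uniform price : p = {best_uniform}, profit = {uniform_profit:.0f}")
```

With these demands the discriminating monopolist earns about 3050 versus about 2560 at the best uniform price, which is why, as long as discrimination remains legal and feasible, there is no reason to expect a switch to uniform pricing.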

The new legislation is limiting the firms’ ability to impose terms on the high risk consumers.  So, they will make less money in that segment.  But the key point is – legislation is not outlawing their ability to price  discriminate. There is no incentive for them to do uniform pricing as they still have the information, ability and incentive to price discriminate.  As long as the different segments have different patterns of willingness to pay for credit card services, the rationale for price discrimination is present.

Moreover, there is a second reason why fees will not go up – competition.  The low risk consumers are profitable as they generate merchant fees because they use their cards frequently.  Suppose all the credit card companies cut rewards and/or impose annual fees.  Then, one company or another has an incentive to cut the fee or increase rewards to steal customers from another.  In fact, this is the most fundamental force keeping fees down and rewards high – the low risk, high volume consumers of credit card services generate revenue.  To entice them to get your card, you have to give them rewards up front.  The legislation does not change this competitive logic.

So, look forward to more points that help keep up the constant upgrading of your iPod.

(Hat tips: Kellogg MBA students – Aneesha Banerjee, Ondrej  Dusek, and Steven Jackson)

Today I heard about the following experiment.  Subjects were given a number to memorize.  Half of the subjects were given 7 digit numbers and half were given 2 digit numbers.  The subjects were asked to walk across a hallway to another room and report the number to the person waiting there.  If they reported the correct number they were going to earn some money.  On the way, there was a cart with coupons available that could be redeemed for a snack.  There were coupons for chocolate cake and coupons for fruit salad.  Subjects could take only one or the other before proceeding to the end of the corridor and completing their participation in the experiment.

63% of the subjects memorizing 7 digit numbers picked the chocolate cake.

Only 49% of the subjects memorizing 2 digit numbers picked the chocolate cake.

I can see two possible explanations of this.  One is very interesting; the other is more prosaic.  What’s your explanation?  I will post mine, and more information, tomorrow.

Update: The experiment is in the paper “Heart and Mind in Conflict:  The Interplay of Affect and Cognition in Consumer Decision Making” by Shiv and Fedorikhin.  Unfortunately I cannot find an ungated version.  It was published in the Journal of Consumer Research in 1999.  I heard about the experiment from a seminar given by David Levine.  Here is the paper he presented, which is partially motivated by this and other experiments.

Our interpretations are similar.  The interesting interpretation is that we have an impulse to pick the chocolate cake and we moderate that impulse with a part of the brain which is also typically engaged in conscious high-level thinking.  When that part is occupied by memorizing 7-digit numbers the impulse runs wild.

The less interesting interpretation is that when we don’t have the capacity to think about what to choose we just choose whatever catches our attention first or most prominently, independent of how “tempting” it is.  One aspect of the study which raises suspicion is the following.  In the main treatment, the coupons were on a table where there was displayed an actual piece of chocolate cake and a bowl of fruit salad.  This treatment gave the results I quoted above.  In a separate treatment, there was just a photograph of the two.  In that treatment the number of digits being memorized made no difference in the coupon taken.

The authors explain this by saying that the actual cake is more tempting than a picture.  That’s plausible, but it would be nice to have something more convincing.  Would we get the same result as in the main treatment if instead of chocolate cake and fruit salad we had yogurt and fruit salad?

Here is a new paper on the economics of open-source software by Michael Schwarz and Yuri Takhteyev.  They approach the subject from an interesting angle.  Most authors are focused on the question of why people contribute to open source.  Instead these authors point out that people contribute to all kinds of public goods all the time and there should be no surprise that people contribute to open-source software.  Instead, the question should be why contributions to open-source software turn out to be so much more important than, say, giving away free haircuts.

The answer lies in a key advantage open-source has over proprietary software.  Imagine you are starting a business and you are considering adopting some proprietary software and this will require you to train your staff to use it and make other complementary investments that are specific to the software.  You make yourself vulnerable to hold-up:  when new versions of the software are released, the seller’s pricing will take advantage of your commitment to the software.  Open source software is guaranteed to be free even after improvements are made so users can safely make complementary investments without fear of holdup.

The theory explains some interesting facts about the software market.  For example, did you know that all major desktop programming languages have open-source compilers?  But there are no open-source tools for developing games for consoles such as the Xbox.

The paper outlines this theory and describes how it fits with the emergence of open source over the years.  The detailed history alone is worth reading.

Start your QJE clocks!  I just submitted my paper Kludged (rhymes with Qjed) to the Quarterly Journal of Economics.  The QJE has a reputation for speedy rejections.  For me this is a virtue.  Obviously I prefer not to be rejected, (although for some a QJE rejection is a well-earned badge of honor) but conditional on being rejected (always the most likely outcome), the sooner the better.

Addendum: Alas, the paper was rejected 😦  It took about 3 1/2 months and I received 4 thoughtful referee reports.  All in all I would say I was treated fairly.