The Kellogg Graduate School of Management at Northwestern has begun syndicating the various blogs published by Kellogg faculty, including Cheap Talk.  It’s great to have the Kellogg endorsement.  And there are many excellent blogs there, worth taking a look.

Via MR, some thoughts on carbon taxes:

However, this does not necessarily mean that revenue-neutral CO2 taxes, or auctioned allowance systems, produce a “double dividend” by reducing the costs of the broader tax system in addition to slowing climate change. There is a counteracting, “tax-interaction” effect (e.g., Goulder 1995). Specifically, the (policy-induced) increase in energy prices drives up the general price level, which reduces real factor returns, and thereby (slightly) reduces factor supply and efficiency.

Indeed, a triple dividend.  The reason, say, labor supply will fall is that the marginal labor was being sold to buy the marginal output that we have decided should not be produced because of the externality.  So this was part of the plan.

Greg Mankiw thinks B-School economists are “practical” and “empirical” while Econ Dept economists are free to be abstract and theoretical.

I don’t think this is true: the research done by economists does not differ much across these two types of schools.  But it is true that the teaching is different.  The MEDS Dept at Kellogg, where I work, is somewhat different from other business schools in that it has always been very theory focused.  The Econ group at Stanford GSB is similar.  Some of the best work in game theory, contract theory and decision theory came out of these departments.

It is the case, as Mankiw says, that teaching has to be practical and useful in a B-School.  Whether that drives research depends on the philosophy of the school.  I have never felt any pressure for my research to be practical.

Mankiw writes his post to answer David Brooks’s query about why B-School economists are giving him better answers about the current state of the economy.  As finance is a B-School specialty, it is very natural that B-School profs may know more about what a CDS is without having to look it up on Wikipedia! But again, finance economists are not more “practical” or “empirical” than econ dept economists.  I bet Doug Diamond and Milt Harris at Chicago GSB have really perceptive things to say about the financial crisis, as does Oliver Hart at Harvard Econ.  They will use simple, clear models (hopefully!) to explain their ideas about how to fix incentives in the financial sector.  And then maybe someone will give a little theory a chance as much as data analysis!

In this case . . . while the challenged packaging contains the word “berries” it does so only in conjunction with the descriptive term “crunch.” This Court is not aware of, nor has Plaintiff alleged the existence of, any actual fruit referred to as a “crunchberry.” Furthermore, the “Crunchberries” depicted on the [box] are round, crunchy, brightly-colored cereal balls, and the [box] clearly states both that the Product contains “sweetened corn & oat cereal” and that the cereal is “enlarged to show texture.” Thus, a reasonable consumer would not be deceived into believing that the Product in the instant case contained a fruit that does not exist. . . . So far as this Court has been made aware, there is no such fruit growing in the wild or occurring naturally in any part of the world.

see here. (Shako shake:  BoingBoing)

Today at Peet’s in Evanston I was trying to work out a model for an idea Sandeep and I are working on related to the game theory of torture.  I started drawing a little graph and then got lost in thought.  I must have looked a little weird (nothing unusual there) because the woman next to me started asking me what was up with the squiggly plot on my pad of paper.

Most economists dread these moments when someone asks you what you do, you have to tell them you’re an economist, and then you prepare to deflect the inevitable questions and/or accusations: “What’s going to happen with interest rates?”  “When’s the economy going to turn around?”  Usually I just mumble and wait for the person to get bored and go on with her reading.  For some reason I was talkative today.

I told her I was a game theorist.  “What’s that for?” I told her I was working on a theory of torture.  She looked horrified.  “How do you make a theory of torture?”

I told her that using game theory is a lot like screenwriting.   Imagine you were a film-maker and you wanted to make a point about torture.  You would invent characters, put them in the roles of torturer and torturee, and describe the events.  You would depict how the torturer would plan his torture, how the torturee would react, and how this would lead the torturer to adjust his approach.  If the film was going to be effective it would have believable characters, it would show the audience a plausible hypothetical situation, and it would show what happens when these characters act out their roles in that situation.  In short, it’s a model.

(As I was saying this I remembered that I learned to think of economics and literature in this way about 20 years ago from Tyler Cowen.  And he has a nice paper on it here.)

She looked even more horrified.  But I was pretty pleased.  I started thinking about Reservoir Dogs (nsfw).

While scripts and models are constrained by a similar requirement of coherence between character and events, there are differences and this makes them complementary.  A model necessarily maps out the entire game tree, while a script describes just one path.  In a model every counterfactual is analyzed and we see their consequences and this explains why those paths are not taken, but a film is a far more vivid account of the path taken.  In a model the off-equilibrium outcomes are the results of mistakes while a well-conceived script can bring in plausible external developments to place the characters in unexpected situations.

Of course film-makers get invited to better parties.

A post at Freakonomics suggests macroeconomics is in more trouble than microeconomics because there is less room for empirical work because there is less data:

In microeconomics, at least there is an abundance of good data, so people who are good at measuring and describing things can succeed. But in macro there is not much data, so most of the rewards are for the mathematics, not the empirics.

As a micro-theorist and hence an outsider, it seems to me that Hari Seldon and the other psychohistorians are wrong: events involving large numbers of people and firms are harder to predict than those involving a few. In micro, for example, it is much harder to understand imperfect competition than it is to understand, say, monopoly price discrimination.  Even if competition is perfect, we know from general equilibrium theory that there is still a multiple-equilibrium problem that makes it hard to predict economic trends. And if we allow monopolies so we can predict trends more easily, here is my main prediction: prices will go up, output will go down, the stock market will go up, and consumers will be worse off.

A post at Freakonomics (and accompanying article at Slate) advocates protection against price depreciation as a way to prop up housing prices:

Sellers could commit to reimbursing their buyers for any fall in the average value of homes in their area in the year following a sale. Such price protection would give buyers confidence that they won’t regret their purchases even if the market does fall further and cheaper houses come on offer — confidence that they need in order to buy now. And if buyers gain confidence, prices won’t fall, so sellers won’t have to pay. … And it’s natural for sellers to provide the insurance that price protection involves. If they can’t sell their houses, they’re going to end up bearing the house price risk anyway.

Here are some other things sellers could do to keep prices from falling:

  1. Commit to compensate buyers for future appreciation on the home they move out of
  2. Throw in tuition for the neighborhood private schools
  3. Remodel the kitchen

“Bob Dylan drew upon a rich lode of old folk tunes for most of his early songs,” Hyde writes. “That’s not theft; that’s the folk tradition at its best.” It seems that nearly two-thirds of Dylan’s work between 1961-63 — some 50 songs — were reinterpretations of American folk classics. In today’s corporate-creative environment, in which Disney was allowed to change the basic nature of copyright law back in the 90s so that their signature mouse wouldn’t fall into the public domain, Dylan’s early work would’ve landed him in court.

from a post at Mental Floss.  The punchline:

Hyde argues that “there are good reasons to manage scarce resources through market forces, but cultural commons are never by nature scarce, so why enclose them far into the future with the fences of copyright and patent?”

I am generally opposed to IP law, but I think this oversimplifies.  There is room for argument about patents.  (For example, I came across this story today about drugs for rare diseases.  It is hard to see how drugs that will benefit a total of 3 people on the whole planet can be financed without monopoly rents.) However, copyright for music and other creative works is a solution to a non-existent incentive problem.

But I am somebody who is very anxious to have the Afghan government and the Pakistani government have the capacity to ensure that those safe havens don’t exist. And so it, I think, will be an important reminder that we have no territorial ambitions in Afghanistan; we don’t have an interest in exploiting the resources of Afghanistan. What we want is simply that people aren’t hanging out in Afghanistan who are plotting to bomb the United States.

Obama said this in an interview with NPR (transcript.)  He actually says “hangin’ out” but the transcriber apparently wanted to maintain an air of formality and wrote “hanging.”  You can hear it here, around the 12:30 mark.  He chuckles a bit when he says it.

These are conspicuously different ways for a President to talk, especially about something as serious as terrorism.  It says something about the man himself and it also draws a sharp contrast with Bush, whose standard catch phrase at these moments would be “rout out the terrorists.”

Previous installment in the series.

In an article about their famous restaurant surveys, Nina and Tim Zagat write

Over the years that we’ve spent surveying hundreds of thousands of diners, one fact becomes clear: Service is *the* weak link in the restaurant industry. How do we know? Roughly 70% of all complaints we receive relate to service. Collectively, complaints about food prices, noise, crowding, smoking, and even parking make up only 30%. Moreover, the average rating for food on our 30-point scale is usually two points higher than the average rating for service. Given the fact that identical people are voting, and that there are hundreds of thousands of them, this deficit is dramatic.

They go on to give some advice to the restaurant industry for improving service.  But don’t these results say that in fact we don’t care about service?  They show that we choose the restaurants with good food despite their bad service.  Sure we complain about the service; other things equal, who doesn’t want better service?  But we can live with bad service if we get good food.

It’s a standard example of a game that has no Nash equilibrium.  But what exactly are the rules of the game?  How about these:

You have fifteen seconds. Using standard math notation, English words, or both, name a single whole number—not an infinity—on a blank index card. Be precise enough for any reasonable modern mathematician to determine exactly what number you’ve named, by consulting only your card and, if necessary, the published literature.

Hmm… maybe it does have a Nash equilibrium.  But after reading the article (highly recommended), I am still not sure.  I think it comes down to whether or not the players are Turing machines.  (Fez flip: The Browser)
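The no-equilibrium intuition is easy to sketch.  Here is a stripped-down version of the game (two players, whoever names the strictly larger whole number wins, ignoring the article’s fifteen-second and notation constraints), just to show why no pure strategy can be a best response:

```python
# Simplified "biggest number" game: two players each name a whole number;
# the strictly larger number wins. Any named number has a profitable
# deviation, so no pure-strategy Nash equilibrium exists.

def best_response(opponent_number: int) -> int:
    """Against any named number, naming one more is strictly better."""
    return opponent_number + 1

# Best-response dynamics never settle: each round overturns the last.
n = 1
history = []
for _ in range(5):
    n = best_response(n)
    history.append(n)

print(history)  # strictly increasing: [2, 3, 4, 5, 6]

# A profile (a, b) is an equilibrium only if neither player gains by
# deviating -- but the loser (or either player, in a tie) always can.
def is_equilibrium(a: int, b: int) -> bool:
    return best_response(b) <= a and best_response(a) <= b

assert not any(is_equilibrium(a, b) for a in range(1, 50) for b in range(1, 50))
```

Whether the article’s richer rules (finite time, open-ended notation) restore an equilibrium is exactly the question left open above.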

No, not because of this, although it can get rough.

I teach the third course in the first year PhD micro sequence at Northwestern and I also teach my intermediate micro course in the Spring.  I am just finishing up teaching this week and my students will soon be writing their evaluations of me.  They will grade me on a scale of 1 to 6.

Because I am the third and last teacher they will evaluate this year, I face some additional risk that my predecessors did not.  Back in the fall, when they evaluated their first teacher they had only one data point with which to estimate the distribution of teaching ability in the Northwestern economics faculty. An outstanding performance would lead them to revise upward their beliefs and a poor performance would revise their beliefs downward.

As a result, when the students sit down to evaluate their fall professor, even a very good performance will earn at most a 5 because the students, now anticipating higher average performance in the winter and spring, will be inclined to hold that 6 in reserve for the best.  Likewise, very bad performances will have their ratings buoyed by the student’s desire to save the 1 for the worst.

When Spring comes, there is nothing more to learn.  By now they know the distribution and the only thing left to do is to rank their Spring professor relative to those who came earlier.  If he is best he gets a 6, if not he gets at most a 4.  His rating is a mean-preserving spread of the previous ratings.
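A quick Monte Carlo makes the mean-preserving spread concrete.  The rating rules below are my own illustrative assumptions, not anything from the evaluations themselves: fall/winter ratings are compressed into {2, …, 5} because students hold the extremes in reserve, while spring ratings use the full scale based on pure ranking.  The means come out the same but the spring variance is much larger:

```python
import random

random.seed(0)
trials = 100_000
fall, spring = [], []

for _ in range(trials):
    q = [random.random() for _ in range(3)]  # teaching quality of 3 profs

    # Assumed fall rule: students hold 1 and 6 in reserve, so ratings
    # compress into {2, 3, 4, 5} by quality quartile.
    fall.append(2 + min(int(q[0] * 4), 3))

    # Assumed spring rule: pure ranking -- best gets 6, worst gets 1,
    # the middle professor gets a 3 or a 4.
    rank = sorted(q).index(q[2])
    spring.append({0: 1, 1: random.choice([3, 4]), 2: 6}[rank])

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(round(mean(fall), 2), round(mean(spring), 2))  # both near 3.5
print(round(var(fall), 2), round(var(spring), 2))    # spring variance far higher
```

Under these toy rules the average rating is unchanged at about 3.5, but the spring variance is roughly 4.25 against 1.25 in the fall: same mean, bigger spread.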

There is a general principle at work here.  The older you get, the more you know about your opportunity costs, and the more decisively you act in response to unanticipated opportunities.  (There is a countervailing force which I believe on net makes us more conservative as we get older, but that is the topic of a later post.)

I guess I am the Tyrone Slothrop of Northwestern University.  I’ve been doing research on the theory of the “democratic peace” – the finding that democracies rarely attack each other.  This has been called “an empirical law” in international relations.  This idea is famous enough that it is offered as a rationalization for spreading democracy by both left- and right-wing politicians.

Why might democracies be more peaceful?  And how about a regime like Iran?  Fareed Zakaria says: “Iran isn’t a dictatorship. It is certainly not a democracy.”  It is something in the middle.  There are elections, but an elite also controls many things, such as the appointment of the Supreme Leader, who has enormous power.

I have done some research with David Lucca and Tomas Sjostrom where we offer a theory for why these regimes which we call limited democracies might be the most warlike of all.  And the data does suggest that countries like Iran are very warlike, especially when facing a similar limited democracy.

Here is a brief attempt to explain the theory informally; in the paper it is done using game theory.  Conflict occurs via a combination of greed and fear, two of the causes of war according to the great Greek historian Thucydides.  Neither side knows whether the other is motivated by greed or fear.  Greedy leaders are hawkish.  But even a side that is not greedy turns aggressive, because the other side may be greedy.  So both sides become aggressive, whether out of greed or out of fear of greed.  We study how political institutions can control greed or stimulate fear.

In fact, the logic above is our model of dictatorship where leaders interact with no thought for the wishes of their citizens.  It is our pure model of greed and fear.  It is inspired by the famous logic of the “reciprocal fear of surprise attack” due to Thomas Schelling.
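The reciprocal-fear logic can be sketched numerically.  This is a stylized toy, with payoffs I made up for illustration, not the Baliga-Lucca-Sjostrom model itself: greedy types always play Hawk, and a normal type plays Hawk only when the chance the opponent is hawkish crosses a fear threshold.  A modest prior probability of greed can then cascade into all-out aggression:

```python
# Payoffs to a normal (non-greedy) type. Being caught dovish by a hawk
# is the worst outcome -- Schelling's reciprocal fear of surprise attack.
C = 1    # cost of arming when the opponent turns out dovish
W = 5    # cost of war (both hawkish)
L = 10   # cost of being caught dovish by a hawk

def hawk_is_better(q: float) -> bool:
    """Is Hawk a best response when the opponent plays Hawk with prob q?"""
    payoff_hawk = q * (-W) + (1 - q) * (-C)
    payoff_dove = q * (-L)
    return payoff_hawk > payoff_dove   # equivalent to q > C / (C + L - W)

def equilibrium_aggression(greed_prob: float) -> float:
    """Iterate best responses: q starts at the prior prob of a greedy
    (always-Hawk) opponent and cascades if normal types join in."""
    q = greed_prob
    for _ in range(10):
        q = greed_prob + (1 - greed_prob) * (1.0 if hawk_is_better(q) else 0.0)
    return q

print(equilibrium_aggression(0.10))  # below threshold 1/6: only the greedy fight
print(equilibrium_aggression(0.20))  # fear cascades: everyone turns hawkish
```

With these numbers the fear threshold is C/(C + L − W) = 1/6: a 10% chance of facing a greedy opponent leaves normal types peaceful, while a 20% chance tips everyone into hawkishness.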

In a democracy, the voters may punish a leader who starts a war unnecessarily. As leaders want to stay in power, this controls greed.  But the voters may also punish a leader who is weak in the face of aggression.  This unleashes fear: democratic leaders turn aggressive lest they appear too dovish in an aggressive environment. So democracies can be peaceful toward each other, as dovish voters control their leaders.  But they can turn aggressive very rapidly if they are concerned their opponent will be aggressive.  In a dictatorship, the leader does not fear losing power, but no one controls his greed.

Now, suppose the leader can survive in power if he pleases the voters or if he satisfies a hawkish minority who favor war.  This regime has some properties of a democracy: the leader survives in power in the same scenarios as the leader of a full democracy.  But he also survives if he starts an unnecessary war, just like a dictator would.  The leader loses power only if he is dovish in the face of aggression, for then neither the average citizen nor the hawks support him.  This type of regime, which we call a limited democracy, is the most aggressive of all: the leader fears losing power and the voters cannot control his greed.  So a little democracy can make things worse if it leads to a regime like this.

The theory leads to a bunch of predictions which we try to confirm in data.  I took a shot at explaining the ideas in a talk I gave to Kellogg MBAs.  The video is here in case you’re interested (you need Real Player to view it).  The article is here (you need Adobe Acrobat to view it).

The town I live in is facing a zoning controversy.  An old family-run restaurant on a downtown corner has gone out of business and put the property up for sale.  The high bidder is Dairy Queen.  But the town’s zoning board is set to reject the sale.

At first there doesn’t seem to be any economic rationale for elected representatives of the town stopping what the citizens of the town are evidently voting for with their dollars.  The argument would be that the reason Dairy Queen is the high bidder is that Dairy Queen expects to earn the most in that location.  Since their earnings come from providing a valuable product and service, this must mean that giving the space to Dairy Queen will generate the most value for the citizens of my town.  Why doesn’t the zoning board see this?

Well, they just might be smart enough to see that the simple argument I have given is flawed.  The flaw is that it assumes that Dairy Queen faces the same market conditions as any other bidder for the space.

Bidding for the right to enter a market is determined not by the amount of value the business will create, but the amount of that value that the business gets to keep.  The share of value that the business gets to keep is determined by market conditions.  Generally, businesses that face competition get a smaller share of the value they create than businesses with less competition.

Because of this, unregulated markets for scarce commercial real estate will not necessarily lead to an efficient allocation.  A bank may be more valuable to the community and yet lose the bidding to Dairy Queen.  Zoning boards can, in principle, correct this by intervening.
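A numerical sketch makes the flaw vivid.  The figures below are hypothetical: the bank creates more total value for the town, but competition lets it keep only a thin slice, so Dairy Queen outbids it:

```python
# A bidder's willingness to pay for the site is the value it creates
# times the share of that value it keeps, which depends on how much
# competition it faces. All numbers are hypothetical.
bidders = {
    # name: (total value created for the community, share captured)
    "bank":        (100, 0.20),  # competitive banking market: thin margins
    "dairy_queen": ( 60, 0.50),  # local market power: fat margins
}

bids = {name: value * share for name, (value, share) in bidders.items()}
auction_winner = max(bids, key=bids.get)
efficient_use  = max(bidders, key=lambda n: bidders[n][0])

print(bids)            # {'bank': 20.0, 'dairy_queen': 30.0}
print(auction_winner)  # dairy_queen -- outbids the bank...
print(efficient_use)   # bank -- ...despite creating less total value
```

The auction allocates the corner to whoever captures the most value, not to whoever creates the most.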

A similar logic is at work in pollution-permit trading markets, although with a twist.  The naive argument is that the social cost of a unit of carbon emissions is the same regardless of who is the emitter, but the benefits vary.  And the benefits will be reflected in the polluters’ willingness to pay for permits.  If we attach a high value to the output of producer A, then producer A should be more willing to pay for the right to produce (and therefore pollute) than producer B, whose output we value less.

But again this depends on the market conditions.  Producer B might be in a competitive market where, at the margin, it internalizes all of the gains from increased output and Producer A might be a monopolist whose marginal revenue is less than price and therefore internalizes only a fraction of the gains.

(The twist is that pollution rights are divisible and so the appropriate calculation is at the margin which flips the comparison between competitive and monopolistic producers.  Real estate is indivisible (or at least much less divisible) and so average calculations take over.)
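Here is the marginal calculation with assumed numbers.  Both firms have the same marginal cost and their marginal unit sells at the same price (so consumers value the units equally), but the monopolist’s marginal revenue sits below price, so it bids less for the marginal permit:

```python
# One permit is needed per unit of output, so a firm's willingness to
# pay for the marginal permit is its marginal profit from one more unit.
# All numbers are illustrative assumptions.
mc = 2.0  # marginal cost, same for both firms

# Competitive firm: price-taker at p = 6, so marginal profit = p - mc.
p = 6.0
wtp_competitive = p - mc

# Monopolist: inverse demand P = 10 - Q, currently at Q = 4 (price also 6),
# but marginal revenue is 10 - 2Q, so marginal profit = MR - mc.
Q = 4.0
price_monopoly = 10 - Q          # 6.0 -- consumers value the unit the same
mr = 10 - 2 * Q                  # 2.0 -- but the firm only pockets MR
wtp_monopolist = mr - mc

print(wtp_competitive)  # 4.0
print(wtp_monopolist)   # 0.0 -- loses the bidding despite equal social value
```

At the margin the competitive firm internalizes the full price while the monopolist internalizes only marginal revenue, which is the flip described in the parenthetical above.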

This points to an advantage of a carbon tax over a market-based permit system.  A carbon tax can be customized by industry and market conditions.  A permit market treats all polluters equally.

Both know how to use the tactics of the Prisoner’s Dilemma to get their subjects to squeal.  George Stephanopoulos explains it this way:

“He flashes a glimpse of what he knows, shaded in a largely negative light, with the hint of more to come, setting up a series of prisoner’s dilemmas in which each prospective source faces a choice: Do you cooperate and elaborate in return (you hope) for learning more and earning a better portrayal–for your boss and yourself? Or do you call his bluff by walking away in the hope that your reticence will make the final product less authoritative and therefore less damaging? If no one talks, there is no book. But someone–then everyone–always talks.”

And according to Matt Alexander in “How to Break a Terrorist…”, the Prisoner’s Dilemma is still a mainstay in the arsenal of methods employed against Al Qaeda.

Nice to know that the story we use to motivate the first game anyone learns in a game theory course might actually be true.
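For anyone who hasn’t sat through that first lecture, here is the textbook version of the story (standard illustrative sentence lengths, in years of prison) and a check that squealing is strictly dominant:

```python
# Classic prisoner's dilemma in years of prison (lower is better).
# Key: (my action, partner's action) -> (my years, partner's years).
# Standard textbook numbers, not from any particular source.
years = {
    ("quiet", "quiet"): (1, 1),    # both stay silent: light sentence
    ("quiet", "talk"):  (10, 0),   # I stay silent, partner squeals
    ("talk",  "quiet"): (0, 10),   # I squeal, partner stays silent
    ("talk",  "talk"):  (5, 5),    # both squeal
}

# "Talk" is strictly dominant: whatever the partner does,
# talking means fewer years for me.
for other in ("quiet", "talk"):
    assert years[("talk", other)][0] < years[("quiet", other)][0]

# So both talk -- even though both staying quiet would be better for both.
print(years[("talk", "talk")], "vs", years[("quiet", "quiet")])
```

That both-talk outcome, worse for both than mutual silence, is exactly the lever the interrogator (or the Washington journalist) pulls.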

I teach undergraduate intermediate microeconomics, a 10-week course that is the second in a two-part sequence at Northwestern University.  I have developed a unique approach to intermediate micro, based originally on a course designed by my former colleague Kim-Sau Chung.  The goal is to study the main themes of microeconomics from an institution-free and, in particular, market-free approach.  To illustrate what I mean: when I cover public goods, I do not start by showing the inefficiency of market-provided public goods.  Instead I ask what are the possibilities and limitations of any institution for providing public goods.  By doing this I illustrate the basic difficulty without confounding it with the additional problems that come from market provision.  I do similar things with externalities, informational asymmetries, and monopoly.

All of this is done using the tools of dominant-strategy mechanism design.  This enables me to talk about basic economic problems in their purest form.  Once we see the problems posed by the environments mentioned above, we investigate efficiency  in the problem of allocating private goods with no externalities.  A cornerstone of the course is a dominant-strategy version of the Myerson-Satterthwaite theorem which shows the basic friction that any institution must overcome.  We then investigate mechanisms for efficient allocation in large economies and we see that the institutions that achieve this begin to resemble markets.

Only at this stage do markets become the primary lens through which to study microeconomics.  We look at a simple model of competition among profit-maximizing auctioneers and a sketch of convergence to competitive equilibrium.  Then we finish with a brief look at general equilibrium in pure exchange economies and the welfare theorems.

There is a minimal amount of game theory, mostly just developing the tools necessary to use mechanism design in dominant strategies, but also a side trip into Nash equilibrium and mixed strategies.
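To give a flavor of dominant-strategy mechanism design (this particular example is my illustration, not necessarily how the notes present it): in a second-price auction, bidding your true valuation is a dominant strategy, which can be verified by brute force on a grid:

```python
# Second-price (Vickrey) auction: the highest bidder wins and pays the
# second-highest bid. Truthful bidding is a dominant strategy: no
# alternative bid ever does strictly better, whatever the rival bids.

def payoff(valuation: float, my_bid: float, rival_bid: float) -> float:
    if my_bid > rival_bid:          # win, pay the rival's bid
        return valuation - rival_bid
    return 0.0                      # lose (ties resolved to the rival here)

grid = [x / 2 for x in range(0, 21)]  # bids/valuations on 0, 0.5, ..., 10
for v in grid:
    for rival in grid:
        truthful = payoff(v, v, rival)
        assert all(payoff(v, b, rival) <= truthful for b in grid)

print("truthful bidding is dominant on the whole grid")
```

The check holds pointwise against every rival bid, with no assumption about the rival’s beliefs or rationality, which is exactly what makes dominant-strategy analysis institution-free.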

In the coming weeks I will be posting here my lecture notes with a brief introduction to the themes of each.  I am distributing these notes under the Creative Commons attribution, non-commercial, share-alike license.  Briefly, you are free to use these for any non-commercial purpose but you must give credit where credit is due.  And you are free to make any changes you wish, but you must make available your modifications under the same license.

Today I am posting my notes for the first week, on welfare economics.

I begin with welfare economics because I think it is important to address at the very beginning what standard we should be using to evaluate economic institutions.  And students learn a lot from just being confronted with the formal question of what is a sensible welfare standard.  Naturally these lectures build to Arrow’s theorem, first discussing the axioms and motivating them and then stating the impossibility result.  In previous years I would present a proof of Arrow’s theorem but recently I have stopped doing that because it is time consuming and bogs the course down at an early stage.  This is one of the casualties of the quarter system.

James Joyce’s Ulysses? The Great Gatsby?  Something challenging by Thomas Pynchon? Something whimsical by P.G. Wodehouse?

No, the smart vote goes to Isaac Asimov’s Foundation Trilogy.

The latest fan to come out in public is Hal Varian.  In a Wired interview, he says:

“In Isaac Asimov’s first Foundation Trilogy, there was a character who basically constructed mathematical models of society, and I thought this was a really exciting idea. When I went to college, I looked around for that subject. It turned out to be economics.”

The first time I saw a reference to the books was in an interview with Roger Myerson in 2002.  And he repeated that he was influenced by Foundation in an answer to a question after he got the Nobel Prize in 2007.  And finally, Paul Krugman also credits the books with inspiring him to become an economist.   A distinguished trio of endorsements!

Asimov’s stories revolve around the plan of Hari Seldon, a “psychohistorian,” to influence the political and economic course of the universe.   Seldon uses mathematical methodology to predict the end of the current Empire.  He sets up two “Foundations,” or colonies of knowledge, to reduce the length of the dark age that will follow the end of empire.  The first Foundation is defeated by a weird mutant called the Mule.  But the Mule fails to locate and kill the Second Foundation. So Seldon manages to preserve human knowledge, and perhaps he even predicted the Mule using psychohistory.  Seldon also has a keen sense of portfolio diversification (two Foundations rather than one) and of the law of large numbers: psychohistory is good at predicting events involving a large number of agents but not at forecasting individual actions.

As the above discussion reveals, I did take a stab at reading these books after I saw the Myerson interview (though I admit I used Wikipedia liberally to jog my memory for this post!).  And you can also see how Myerson’s “mechanism design” theory might have come out of reading Asimov.  I enjoyed reading the first book in the trilogy, and it’s clear how it might excite a teenage boy with an aptitude for maths.  The next two books are much worse.  I struggled through them just to find out how it all ended.  Perhaps I read them when I was too old to appreciate them.

The Lord of the Rings is probably wooden to someone who reads it in their forties.  It still sparkles for me.

Storn White, lifestyle artist.

Like most San Franciscans, Charles Pitts is wired. Mr. Pitts, who is 37 years old, has accounts on Facebook, MySpace and Twitter. He runs an Internet forum on Yahoo, reads news online and keeps in touch with friends via email. The tough part is managing this digital lifestyle from his residence under a highway bridge.

The article is here. Another highlight:

Michael Ross creates his own electricity, with a gas generator perched outside his yellow-and-blue tent. For a year, Mr. Ross has stood guard at a parking lot for construction equipment, under a deal with the owner. Mr. Ross figures he has been homeless for about 15 years, surviving on his Army pension.

Inside the tent, the taciturn 50-year-old has an HP laptop with a 17-inch screen and 320 gigabytes of data storage, as well as four extra hard drives that can hold another 1,000 gigabytes, the equivalent of 200 DVDs. Mr. Ross loves movies. He rents some from Netflix and Blockbuster online and downloads others over an Ethernet connection at the San Francisco public library.

Greg Mankiw is trying to make a reductio ad absurdum critique of the objective of income redistribution.  He has written a paper with Matthew Weinzierl which shows that optimal taxation will typically involve taxing all kinds of characteristics that seem patently unfair and unacceptable.  He concludes from this that it is the goal of income redistribution that entails these absurdities.

But there is a prominent guy who lives at a nice home at 1600 Pennsylvania Avenue who wants to “spread the wealth around.” The moral and political philosophy used to justify such income redistribution is most often a form of Utilitarianism. For example, the work on optimal tax theory by Emmanuel Saez, the most recent winner of the John Bates Clark award, is essentially Utilitarian in its approach.

The point of our paper is this: If you are going to take that philosophy seriously, you have to take all of the implications seriously. And one of those implications is the optimality of taxing height and other exogenous personal characteristics correlated with income-producing abilities.

This argument fails because the objectionable policies implied by optimal taxation in his model have nothing to do with income redistribution or utilitarianism.  Indeed they would be optimal under the weaker and unassailable welfare standard of Pareto efficiency which I would assume Mankiw embraces.

Let me summarize.  Optimal taxation involves minimizing the distortionary effect on output from raising some required level of revenue.  It does not matter what that revenue is being used for.  It could be for redistribution but it could also be for producing public goods that will benefit everyone.  Whatever revenue is required, the optimal taxation policy generates this revenue with minimal cost in terms of reduced incentives for private production.    Taxing exogenous and observable characteristics that are correlated with productivity is a way of generating revenue without distorting incentives.

If we tax income (a direct measure of productivity) you can lower your taxes by earning less, that is a distortion.  If we tax your height (known to be correlated with productivity), you cannot avoid these taxes by making yourself shorter.
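The point can be made with a toy model (my own quasi-linear functional forms, purely illustrative): an income tax shrinks labor supply, while a lump-sum tax on an unchangeable trait raises the same revenue without moving the margin at all:

```python
# Worker with wage w chooses hours L to maximize after-tax income minus
# effort cost L^2/2. Assumed functional forms, purely illustrative.
w = 1.0

def hours_income_tax(t: float) -> float:
    # max (1-t)*w*L - L^2/2  =>  L* = (1-t)*w : work less when earnings taxed
    return (1 - t) * w

def hours_lump_sum(T: float) -> float:
    # max w*L - L^2/2 - T    =>  L* = w : the tax can't be avoided, so it
    # doesn't change the labor choice -- you can't make yourself shorter
    return w

t = 0.3
revenue = t * w * hours_income_tax(t)   # same revenue target for both taxes
print(hours_income_tax(t))              # 0.7 -- output distorted downward
print(hours_lump_sum(revenue))          # 1.0 -- same revenue, no distortion
```

Same revenue either way; only the income tax distorts output, which is why the height tax looks “optimal” in this framework regardless of what the revenue is for.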

So the implication that Mankiw wants us to be uncomfortable with is an implication of the way optimal tax theorists conceive the problem of revenue generation, and it would be present regardless of how we imagine that tax revenue being spent. It has nothing to do with redistribution: we can feel uncomfortable with height taxation without that making us think twice about our desire to redistribute wealth.

Here is an interesting article about the history of the Ivy league and the member Universities’ attitudes toward sport.

The Ivy is never going to be the Southeastern Conference—and nobody is suggesting it should be. The schools don’t need the exposure of sports to attract students and alumni donations. But some of the league’s alumni complain that the schools offer their students the best of everything, except in this one area. “Why not give them the same opportunities and the same platform in athletics that you do in academics?” says Marcellus Wiley, a former NFL defensive end who played at Columbia in the 1990s. “I think they should revisit everything.”

If we take the objective to be maintaining reputation and attracting donations, then there is a broader question.  Why is the concentration among schools that compete on academic excellence so much higher than among those that compete on athletics?  Competition for dominance in sport appears to be more costly and to occur at a higher frequency than the competition for academic excellence.  Some possible reasons:

  1. There is more variance in academic talent than in talent in sports.  Thus the top end is thinner and the market is smaller.
  2. There is more continuity in academic strength purely because of numbers.  A bad recruiting class for the basketball team a few years in a row and you are back to square one.  A freshman class at Harvard is large enough that variations wash out.
  3. It is easier to throw money at sport.  One coach makes the whole program.  Assessing the talent of faculty and attracting it with money is more complicated.  And maybe irrelevant.

I would like to believe 1 but I don’t.  I would like not to believe 3 but it’s hard. I do believe 2.

Oliver Hart and Luigi Zingales wrote an op-ed with their plan to regulate large financial institutions (LFIs) that are too big to fail.  I’ve blogged about their idea before but here it is again quickly:

Suppose a credit default swap (CDS) pays off if Citibank, say, fails.  Different traders of the Citibank CDS have different information about the chance that Citibank may fail.  The price of the Citibank CDS aggregates that information.  The price will be high if the chance of failure is high.  Hence, a regulator can monitor the Citibank CDS price and step in to force the LFI to cover its loans or be taken over.

They have fleshed this clever plan out with a model but the main idea remains the same.

Here is one problem I see:  If the CDS price is high, the LFI is meant to issue more equity to cover the debt against which the CDS is trading.   Suppose it just refuses – i.e. it breaks the rules. In their scheme, the regulator is meant to step in and put the firm in the hands of a receiver who recapitalizes it and sells it.  But this is costly for the regulator.  Maybe the market reacts to the takeover badly and systemic risk spreads through the financial system.  The regulator can instead keep the current managers employed and bail out the LFI using taxpayer money.  If this option makes the managers better off, this is what they will push for.  This is what we see GM doing to avoid bankruptcy, for instance, in their case trying to exploit the political risk of bankruptcy rather than systemic risk.

If the threat to enforce the rules is not credible, the LFI has an incentive to “hold up” the regulator when the CDS price goes up.  The mechanism proposed by Hart and Zingales is not credible because there is a commitment problem.

A post at Language Log explores the use of mathematics in linguistics.  It closes with

Anyhow, my conclusion is that anyone interested in the rational investigation of language ought to learn at least a certain minimum amount of mathematics.

Unfortunately, the current mathematical curriculum (at least in American colleges and universities) is not very helpful in accomplishing this — and in this respect everyone else is just as badly served as linguists are — because it mostly teaches things that people don’t really need to know, like calculus, while leaving out almost all of the things that they will really be able to use. (In this respect, the role of college calculus seems to me rather like the role of Latin and Greek in 19th-century education:  it’s almost entirely useless to most of the students who are forced to learn it, and its main function is as a social and intellectual gatekeeper, passing through just those students who are willing and able to learn to perform a prescribed set of complex and meaningless rituals.)

Before getting into economics and after getting out of physics, I took calculus and found it very useful and interesting for its own sake.  I do see that the way calculus is taught in the US is geared toward engineers and physicists, but I have a hard time thinking of what mathematics would substitute for calculus in the undergraduate curriculum if the goal was to teach students something useful.  It can’t be analysis or topology.  I took abstract algebra as an undergraduate and found it esoteric and boring.  Discrete mathematics?  OK maybe statistics, but don’t you need integration for that?  Help me out here, if you had the choice, what would you replace calculus with? And remember the goal is to teach something useful.

Via kottke.org, here is the first installment of an Errol Morris essay on Han van Meegeren, the Dutch artist who duped the art world into thinking that his paintings were the work of Vermeer.  Morris concludes with the following:

To be sure, the Van Meegeren story raises many, many questions. Among them: what makes a work of art great? Is it the signature of (or attribution to) an acknowledged master? Is it just a name? Or is it a name implying a provenance? With a photograph we may be interested in the photographer but also in what the photograph is of. With a painting this is often turned around, we may be interested in what the painting is of, but we are primarily interested in the question: who made it? Who held a brush to canvas and painted it? Whether it is the work of an acclaimed master like Vermeer or a duplicitous forger like Van Meegeren — we want to know more.

The economics version of this question is why the price of a painting would fall just because it has been discovered to be a forgery by technical means and not because the painting was considered of lesser quality.  And a corollary question:  if you own a painting which is thought by all to be a genuine Vermeer, why would you or anyone invest to find out whether it was a forgery?  There is probably a good answer to this that doesn’t require resorting to the assumption that buyers value the name more than the painting.

The value of a painting is the flow value of having it hang on your wall plus the eventual resale value.  For the truly immortal works of art the flow value is negligible relative to the resale value.  The resale value is linked to the flow value to the person to whom it will be sold, the person she will sell it to, etc.  Ultimately this means that the price is determined by the sequence of people who have the greatest appreciation for art, since they will be willing to pay the most for the flow value. The existence of just one person in that sequence who is sensitive enough to distinguish a true Vermeer from a Van Meegeren implies a large difference in the prices, even if that person is not alive today and will not be for many generations.

“These are relatively simple physical equations, so you program them into the computer and therefore kind of let the computer animate things for you, using those physics,” said May. “So in every frame of the animation, (the computer can) literally compute the forces acting on those balloons, (so) that they’re buoyant, that their strings are attached, that wind is blowing through them. And based on those forces, we can compute how the balloon should move.”

This process is known as procedural animation: motion is described by an algorithm or set of equations.  It stands in stark contrast to key frame animation, in which the animators explicitly define the movement of an object or objects in every frame.
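The per-frame force computation May describes can be sketched in a few lines (every constant below is invented for illustration; Pixar's actual solver is of course far more elaborate): sum buoyancy, wind, and a spring force standing in for the string, then integrate with explicit Euler once per frame.

```python
# Minimal procedural-animation sketch for one balloon, with made-up
# physics constants. Each frame we compute the net force and advance
# velocity and position by one explicit Euler step.

def step(pos, vel, dt=1 / 24, mass=0.05,
         buoyancy=0.08, wind=(0.02, 0.0), anchor=(0.0, 0.0), k=0.5):
    """Advance one animation frame; pos and vel are (x, y) tuples."""
    fx = wind[0] + k * (anchor[0] - pos[0])              # wind + string tension (x)
    fy = buoyancy + wind[1] + k * (anchor[1] - pos[1])   # buoyancy + tension (y)
    ax, ay = fx / mass, fy / mass
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(24):            # simulate one second of film at 24 fps
    pos, vel = step(pos, vel)
print(pos)  # the balloon has drifted up and downwind, reined in by its string
```

No one keyframed that trajectory: the motion falls out of the force equations, which is exactly the contrast with key frame animation described above.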

Why stop there?  Next, we can use models from the behavioral sciences, program a few equations and let the characters, dialog, and action animate themselves by following the solution of the model.  Don’t believe me? Here’s how to procedurally animate Romeo and Juliet.

Tom Schelling has a famous example illustrating how to solve coordination problems.  Suppose you are supposed to meet someone in New York City but you forgot to specify a location.  This was before the era of cell phones, so there is no opportunity to communicate before you pick a place to go.  Where do you go?  You go where you think your friend is most likely to go, which is of course where she thinks you think she is most likely to go, etc.

Notice that convenience or taste or proximity have no direct bearing on your choice.  These considerations may indirectly influence your choice, but only if she thinks you think she thinks … that they will influence your choice.

There was an old game show called The Newlywed Game, where I received some of my earliest training as a game theorist, in my living room at roughly the age of 7.  Here is how the show worked.  Four pairs of newlyweds competed.  The husbands, say, would be on stage first, with the wives in an isolated room.  The husbands would be asked a series of questions about their wives, say “What wedding gift from your family does your wife hate the most?” and the husbands would have to guess what the wives would say.  (This was the 70’s so every episode had at least one question about “making whoopee,” like “what movie star would your wife say you best remind her of when you’re makin’ whoopee?”)

When you watch this show every night for as long as I did, you soon figure out that the way to win is to disregard the question completely and just find something to say that your wife is likely to say, which is of course what she thinks you think she is likely to say, etc.  You could try to make a plan with your newlywed spouse beforehand about what to say, something like the first answer is “the crock pot”, the second answer is “burt reynolds” etc.  But this looks awkward when the first question turns out to be “What is your wife’s favorite room to make whoopee?” etc.

So the problem is just like Schelling’s meeting problem.  The truth is of secondary importance.  You want to find the most obvious answer, i.e. the one your wife is most likely to give because she thinks you are most likely to give it, etc.    For example, if the question is, “Which Smurf will your wife say best describes your style of makin’ whoopee?” then even though you think the answer is probably “Clumsy Smurf” or “Sloppy Smurf”, you say “Hefty Smurf” because that is the obvious answer.
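The "which is of course what she thinks you think, etc." regress can be captured in a toy level-k model (the answer options and salience weights are invented): a level-0 spouse answers in proportion to raw salience, and a level-k spouse simply picks whatever a level-(k-1) spouse is most likely to say. The iteration locks onto the most salient answer immediately and stays there no matter how many levels of "she thinks you think" you add.

```python
# Toy level-k sketch of the Newlywed logic, with made-up salience weights.

salience = {"Clumsy Smurf": 0.3, "Sloppy Smurf": 0.3, "Hefty Smurf": 0.4}

def level_k_answer(salience, k):
    """Distribution over answers for a level-k spouse."""
    if k == 0:
        return salience                      # level-0: answer by raw salience
    prev = level_k_answer(salience, k - 1)
    best = max(prev, key=prev.get)           # best-respond to level k-1
    return {a: (1.0 if a == best else 0.0) for a in salience}

for k in (1, 2, 5):
    dist = level_k_answer(salience, k)
    print(k, max(dist, key=dist.get))  # every level k >= 1 says "Hefty Smurf"
```

The fixed point is the focal answer: even a spouse who privately believes the truth is "Clumsy Smurf" says "Hefty Smurf", because salience, not truth, is what the chain of beliefs converges on.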


Ok, all of this is setup to tell you that Gary Becker is clearly a better game theorist than Steve Levitt.  Via Freakonomics, Levitt tells the story of a Chicago economics faculty Newlywed game played at their annual skit party.  (Northwestern is one of the few top departments that doesn’t have one of these.  That sucks.)  Becker and Levitt were one of the newlywed pairs.  According to Levitt they did poorly, but it looks like Becker was onto the right strategy, while Levitt was trying to figure out the right answers:

The first question was, “Who is Gary’s favorite economist?” I thought I knew this one for sure. I guessed Milton Friedman. Gary answered Adam Smith. (Although he later apologized to me and said Friedman was the right answer.)

Then they asked, “In Gary’s opinion, how many more quarters will the current recession last?” I guessed he would say three more quarters, but his actual answer was two more quarters.

The next question was, “Who does Gary think will win the next Nobel prize in economics?” This is a hard one, because there are so many reasonable guesses. I figured if Becker writes a blog with Posner, he might think Posner would win the Nobel prize, so that was my answer. Gary said Gene Fama instead.

The last question we got wrong was one that was posed to Gary, asking which of the following three people I would most like to have lunch with: Marilyn Monroe, Napoleon, or Karl Marx. I know Gary has a major crush on Marilyn Monroe, so that was the answer I gave, even though the question was about who I would want to have lunch with, not who Gary would want to have lunch with. Gary answered Karl Marx (which makes me wonder what he thinks of me), but did volunteer, as I strongly suspected, that he himself would of course prefer Marilyn to either of the other two.

But close:

You take all of the conflict, all of the chaos, all of the noise, and out of that comes this precise mathematical distribution of the way attacks are ordered in this conflict. This blew our mind. Why should a conflict like Iraq have this as its fundamental signature? Why should there be order in war? We didn’t really understand that. We thought maybe there is something special about Iraq. So we looked at a few more conflicts. We looked at Colombia, we looked at Afghanistan, and we looked at Senegal.

See the TED talk. (hat tip:  The Browser)
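The "precise mathematical distribution" the quote refers to is a power law over attack sizes, and its signature exponent can be estimated from data by maximum likelihood. Here is a sketch on synthetic data (the simulated dataset is mine; the research behind the talk reports exponents in this neighborhood, around 2.5, but treat these numbers as illustrative):

```python
# Sketch: draw attack sizes from a power law P(X >= x) ~ x**(1 - alpha)
# and recover the exponent with the maximum-likelihood (Hill) estimator.

import math
import random

def sample_power_law(alpha, xmin, n, rng):
    """Draw n values from a continuous power law via inverse transform."""
    return [xmin * (1 - rng.random()) ** (-1 / (alpha - 1)) for _ in range(n)]

def hill_alpha(xs, xmin):
    """MLE exponent: alpha = 1 + n / sum(ln(x / xmin))."""
    return 1 + len(xs) / sum(math.log(x / xmin) for x in xs)

rng = random.Random(0)                       # fixed seed for reproducibility
data = sample_power_law(alpha=2.5, xmin=1.0, n=20000, rng=rng)
print(round(hill_alpha(data, 1.0), 2))       # recovers an exponent near 2.5
```

The point of the quoted passage is that real conflict data, run through an estimator like this, keeps producing roughly the same exponent across very different wars.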

The French Open began on Sunday and if you are an avid fan like me the first thing you noticed is that the Tennis Channel has taken a deeper cut of the exclusive cable television broadcast in the United States.  I don’t subscribe to the Tennis Channel and until this year it has been only a slight nuisance, taking a few hours here and there and the doubles finals. But as I look over the TV schedule for the next two weeks I see signs of a sea change.

First of all, only the TC had the French Open on Memorial Day, yesterday.  I think this was true last year as well, but this year all of the early-session live coverage for the entire tournament is exclusive to TC.  ESPN2 takes over for the afternoon session and will broadcast early-session games on tape.

This got me thinking about the economics of broadcasting rights.  I poked around and discovered that TC in fact owns all US cable broadcasting rights to the French Open for many years to come.  ESPN2 is subleasing those rights from TC for the segments it is airing.  So that is interesting.  Why is TC outbidding ESPN2 for the rights and then selling most of them back?

Two forces are at work here.  First, ESPN2 as a general sports broadcaster has more valuable alternative uses for the air time, so its opportunity cost of airing the French Open is higher.  But on the other side, ESPN2 can generate a larger audience than TC just from spillovers and self-advertising, so its value for the rights to the French Open is higher. One of these effects outweighs the other, and so on net the French Open is more valuable to one of the two networks.  Naively we should think that whichever network that is would outbid the other and air the tournament.  So what explains this hybrid arrangement?

My answer is that there is uncertainty about TC’s ability to generate enough audience for a grand slam to make it more valuable for TC than for ESPN2.  In the face of this, TC wants a deal that allows it to experiment on a small scale and find out what it can do, but also leaves it the option of selling back the rights if the news is bad.  TC can manufacture such a deal by buying the exclusive rights.  ESPN2 knows its net value for the French Open and will bid that value for the original rights.  And if it loses the bidding it will always be willing to buy those rights at the same price on the secondary market from TC. TC will outbid ESPN2 because the value of the option is at least the resale price, and in fact strictly higher if there is a chance that the news is good.
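The option argument can be illustrated with hypothetical numbers (none of these figures come from any actual rights deal): ESPN2 knows its value and bids it; TC learns its own value only by airing the tournament, and can resell to ESPN2 if the news is bad. Owning the rights is then worth more to TC than ESPN2's bid whenever there is any chance the news is good.

```python
# Made-up numbers: ESPN2's known value is 100; TC's value turns out to
# be either 150 (with prob 0.3) or 40. TC keeps the rights when its
# realized value beats the resale price, and sells back otherwise.

def tc_value(p_good, high, low, espn_value):
    """Expected value to TC of owning the rights, given the resale option."""
    return p_good * max(high, espn_value) + (1 - p_good) * max(low, espn_value)

espn_value = 100.0
v = tc_value(p_good=0.3, high=150.0, low=40.0, espn_value=espn_value)
print(v)  # 0.3 * 150 + 0.7 * 100 = 115 > ESPN2's bid of 100
```

The max() inside is the option: TC is never worse off than ESPN2's value, and strictly better off with probability 0.3, so it rationally outbids ESPN2 and then sells most of the air time back while it learns.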

So, the fact that TC has steadily reduced the amount of time it is selling back to ESPN2 suggests that so far the news is looking good and there is a good chance that soon the TC will be the exclusive cable broadcaster for the French Open and maybe even other grand slams.

Bad news for me because in my area the TC is not broadcast in HD and so it is simply not worth the extra cost to subscribe. While we are on the subject, here is my French Open outlook

  1. Federer beat Nadal convincingly in Madrid last week.  I expect them in the final and this could bode well for Federer.
  2. If there is anybody who will spoil that outcome it will be Verdasco who I believe is in Nadal’s half of the draw.  The best match of the tournament will be Nadal-Verdasco if they meet.
3. The Frenchmen are all fun but they don’t seem to have the staying power.  Andy Murray lost a lot psychologically when he went into this year’s Australian Open crowing and then lost early.
  4. I always root for Tipsarevich.  And against Roddick.
  5. All of the excitement on the women’s side from the past few years seems to have completely disappeared with the retirement of Henin, the injury to Sharapova and the meltdown of the Serbs.  I expect a Williams-Williams yawner.

Turns out that a good way to predict how the US Supreme Court will rule is by counting the number of questions asked of either side.  The winning side will be the one asked the fewest questions.  Is this because

  1. the justices have made up their minds already and ask more questions of the losing side, or
  2. more questions put the lawyer on the defensive, weakening his position?

That is, does the outcome cause the questions or the other way around?  I think it has to be the former; indeed the latter eventually implies the former.  If questioning per se made a side weaker, then the justices would learn this and would realize that their questions were generating more heat than light.  Once they realize this, they will know that the only way to get their side to win is to ask more questions of the other side.

Google determines quality scores by calculating multiple factors, including the relevance of the ad to the specific keyword or keywords, the quality of the landing page the ad is linked to, and, above all, the percentage of times users actually click on a given ad when it appears on a results page. (Other factors, Google won’t even discuss.) There’s also a penalty invoked when the ad quality is too low—in such cases, the company slaps a minimum bid on the advertiser. Google explains that this practice—reviled by many companies affected by it—protects users from being exposed to irrelevant or annoying ads that would sour people on sponsored links in general. Several lawsuits have been filed by would-be advertisers who claim that they are victims of an arbitrary process by a quasi monopoly.

What is the distortion?  One example would be an advertiser who is targeting a very select segment of the market and expects few to click through but expects a lot of money from those that do.  This advertiser is willing to pay a lot but may be excluded on quality score.  So one view is that Google is transferring value from high-paying niche consumers to the rest of the market.

However, for every set of keywords there is another market: Google would optimally lower the quality penalty on searches whose keywords reveal that the searcher is really looking for a niche product. On this view, the quality score is a mechanism for preventing unraveling of an efficient market segmentation.
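A stripped-down sketch of the auction logic the article describes (the scoring rule, thresholds, and numbers here are all invented, not Google's actual formula): rank ads by bid times quality score, and impose a minimum bid on low-quality ads that can exclude them outright.

```python
# Illustrative ad auction: each ad is (name, bid, quality in [0, 1]).
# Low-quality ads face a minimum-bid penalty; survivors are ranked
# by bid * quality. All constants are hypothetical.

MIN_QUALITY = 0.2
MIN_BID_FOR_LOW_QUALITY = 5.0

def rank_ads(ads):
    """Return eligible ads sorted best-first by bid * quality."""
    eligible = [
        (name, bid, q) for name, bid, q in ads
        if q >= MIN_QUALITY or bid >= MIN_BID_FOR_LOW_QUALITY
    ]
    return sorted(eligible, key=lambda a: a[1] * a[2], reverse=True)

ads = [
    ("broad retailer", 1.0, 0.8),    # low bid, high quality
    ("niche advertiser", 4.0, 0.1),  # high bid, rarely clicked
]
ranking = rank_ads(ads)
print([name for name, *_ in ranking])  # the niche advertiser is excluded
```

This is exactly the distortion in the niche-advertiser example above: willingness to pay alone does not win the slot, and the segmentation story is about when the quality threshold should itself vary with the keywords.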

The article is in Wired and it looks at Hal Varian, chief economist at Google.