
“Bob Dylan drew upon a rich lode of old folk tunes for most of his early songs,” Hyde writes. “That’s not theft; that’s the folk tradition at its best.” It seems that nearly two-thirds of Dylan’s work between 1961-63 — some 50 songs — were reinterpretations of American folk classics. In today’s corporate-creative environment, in which Disney was allowed to change the basic nature of copyright law back in the 90s so that their signature mouse wouldn’t fall into the public domain, Dylan’s early work would’ve landed him in court.

from a post at Mental Floss.  The punchline:

Hyde argues that “there are good reasons to manage scarce resources through market forces, but cultural commons are never by nature scarce, so why enclose them far into the future with the fences of copyright and patent?”

I am generally opposed to IP law, but I think this oversimplifies.  There is room for argument about patents.  (For example, I came across this story today about drugs for rare diseases.  It is hard to see how drugs that will benefit a total of 3 people on the whole planet can be financed without monopoly rents.) However, copyright for music and other creative works is a solution to a non-existent incentive problem.

But I am somebody who is very anxious to have the Afghan government and the Pakistani government have the capacity to ensure that those safe havens don’t exist. And so it, I think, will be an important reminder that we have no territorial ambitions in Afghanistan; we don’t have an interest in exploiting the resources of Afghanistan. What we want is simply that people aren’t hanging out in Afghanistan who are plotting to bomb the United States.

Obama said this in an interview with NPR (transcript.)  He actually says “hangin’ out” but the transcriber apparently wanted to maintain an air of formality and wrote “hanging.”  You can hear it here, around the 12:30 mark.  He chuckles a bit when he says it.

These are conspicuously different ways for a President to talk, especially about something as serious as terrorism.  It says something about the man himself and it also draws a sharp contrast with Bush, whose standard catch phrase at these moments would be “root out the terrorists.”

Previous installment in the series.

In an article about their famous restaurant surveys, Nina and Tim Zagat write

Over the years that we’ve spent surveying hundreds of thousands of diners, one fact becomes clear: Service is *the* weak link in the restaurant industry. How do we know? Roughly 70% of all complaints we receive relate to service. Collectively, complaints about food prices, noise, crowding, smoking, and even parking make up only 30%. Moreover, the average rating for food on our 30-point scale is usually two points higher than the average rating for service. Given the fact that identical people are voting, and that there are hundreds of thousands of them, this deficit is dramatic.

They go on to give some advice to the restaurant industry for improving service.  But don’t these results say that in fact we don’t care about service?  They show that we choose the restaurants with good food despite their bad service.  Sure, we complain about the service; other things equal, who doesn’t want better service?  But we can live with bad service if we get good food.

It’s a standard example of a game that has no Nash equilibrium.  But what exactly are the rules of the game?  How about these:

You have fifteen seconds. Using standard math notation, English words, or both, name a single whole number—not an infinity—on a blank index card. Be precise enough for any reasonable modern mathematician to determine exactly what number you’ve named, by consulting only your card and, if necessary, the published literature.

Hmm… maybe it does have a Nash equilibrium.  But after reading the article (highly recommended), I am still not sure.  I think it comes down to whether or not the players are Turing machines.  (Fez flip: The Browser)
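Here is a toy check of the equilibrium question (my own two-player simplification, in which the larger named number simply wins a fixed prize): a capped version of the game has an equilibrium, but it sits exactly at the cap, so removing the cap removes it.

```python
# Toy "name the biggest number" game: two players each name a whole
# number; the larger number wins (a tie splits the prize).  We search a
# truncated grid for Nash equilibria to see why the unrestricted game
# has no pure equilibrium.

def payoff(a, b):
    """Player 1's payoff when the players name a and b."""
    if a > b:
        return 1.0
    if a < b:
        return 0.0
    return 0.5  # tie splits the prize

def is_nash(a, b, cap):
    """Is (a, b) immune to deviations within {0, ..., cap}?"""
    best_a = max(payoff(x, b) for x in range(cap + 1))
    best_b = max(payoff(y, a) for y in range(cap + 1))
    return payoff(a, b) >= best_a and payoff(b, a) >= best_b

cap = 50
equilibria = [(a, b) for a in range(cap + 1) for b in range(cap + 1)
              if is_nash(a, b, cap)]

# Only (cap, cap) survives: every other profile is upset by naming one
# more than the opponent.  Without a cap, that profile vanishes too.
print(equilibria)
```

The loser always deviates to the winner's number plus one, so the only fixed point is both players at the ceiling; an unbounded strategy space has no ceiling.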

No, not because of this, although it can get rough.

I teach the third course in the first year PhD micro sequence at Northwestern and I also teach my intermediate micro course in the Spring.  I am just finishing up teaching this week and my students will soon be writing their evaluations of me.  They will grade me on a scale of 1 to 6.

Because I am the third and last teacher they will evaluate this year, I face some additional risk that my predecessors did not.  Back in the fall, when they evaluated their first teacher, they had only one data point with which to estimate the distribution of teaching ability in the Northwestern economics faculty.  An outstanding performance would lead them to revise their beliefs upward and a poor performance would lead them to revise downward.

As a result, when the students sit down to evaluate their fall professor, even a very good performance will earn at most a 5 because the students, now anticipating higher average performance in the winter and spring, will be inclined to hold that 6 in reserve for the best.  Likewise, very bad performances will have their ratings buoyed by the student’s desire to save the 1 for the worst.

When Spring comes, there is nothing more to learn.  By now they know the distribution and the only thing left to do is to rank their Spring professor relative to those who came earlier.  If he is best he gets a 6, if not he gets at most a 4.  His rating is a mean-preserving spread of the previous ratings.
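The fall-versus-spring asymmetry can be illustrated with a toy simulation (my own stylized model, not anything formal from the argument): fall raters hold the extremes of the 1-to-6 scale in reserve, while spring raters, knowing the distribution, use the full scale.  Both rating rules are linear in quality, so the means match while the spring ratings are more spread out.

```python
import random
import statistics

# Stylized model (invented for illustration): teaching quality q is
# uniform on [0, 1].  Fall students compress ratings into {2, ..., 5},
# saving 1 and 6 for professors yet to come; spring students know the
# distribution and use the whole scale {1, ..., 6}.

random.seed(0)
qualities = [random.random() for _ in range(100_000)]

fall = [2 + round(3 * q) for q in qualities]    # extremes held in reserve
spring = [1 + round(5 * q) for q in qualities]  # full scale used

# Same average rating, but the spring distribution is a mean-preserving
# spread: more 6s *and* more 1s.
print(statistics.mean(fall), statistics.mean(spring))
print(statistics.pvariance(fall), statistics.pvariance(spring))
```

Both rules average about 3.5, but the spring variance is strictly larger, which is exactly the mean-preserving spread the post describes.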

There is a general principle at work here.  The older you get the more you know about your opportunity costs, the more decisively you act in response to unanticipated opportunities.  (There is a countervailing force which I believe on net makes us more conservative when we get older, but that is the topic of a later post.)

The town I live in is facing a zoning controversy.  An old family-run restaurant on a downtown corner has gone out of business and put the property up for sale.  The high bidder is Dairy Queen.  But the town’s zoning board is set to reject the sale.

At first there doesn’t seem to be any economic rationale for elected representatives of the town stopping what the citizens of the town are evidently voting for with their dollars.  The argument would be that the reason Dairy Queen is the high bidder is that Dairy Queen expects to earn the most in that location.  Since their earnings come from providing a valuable product and service, this must mean that giving the space to Dairy Queen will generate the most value for the citizens of my town.  Why doesn’t the zoning board see this?

Well, they just might be smart enough to see that the simple argument I have given is flawed.  The flaw is that it assumes that Dairy Queen faces the same market conditions as any other bidder for the space.

Bidding for the right to enter a market is determined not by the amount of value the business will create, but the amount of that value that the business gets to keep.  The share of value that the business gets to keep is determined by market conditions.  Generally, businesses that face competition get a smaller share of the value they create than businesses with less competition.

Because of this, unregulated markets for scarce commercial real estate will not necessarily lead to an efficient allocation.  A bank may be more valuable to the community and yet lose the bidding to Dairy Queen.  Zoning boards can, in principle, correct this by intervening.
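A numeric sketch makes the flaw concrete (the figures below are invented for illustration): the bid reflects the value a business captures, not the value it creates.

```python
# Hypothetical numbers: each bidder's total value created for the town,
# and the share of that value the business keeps given its market
# conditions.  A competitive business keeps a thin slice; a business
# facing little local competition keeps most of it.

bidders = {
    # name: (total value created, share captured)
    "bank":        (100_000, 0.20),  # competitive banking: thin margins
    "dairy_queen": ( 60_000, 0.70),  # little nearby competition
}

# Each bidder bids up to the value it gets to keep.
bids = {name: value * share for name, (value, share) in bidders.items()}
winner = max(bids, key=bids.get)

print(bids)    # bank bids 20,000; Dairy Queen bids 42,000
print(winner)  # Dairy Queen wins despite creating less total value
```

With these numbers the bank creates 100,000 of value but can only bid 20,000, while Dairy Queen creates 60,000 and bids 42,000: the high bidder and the efficient use come apart.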

A similar logic is at work in pollution-permit trading markets, although with a twist.  The naive argument is that the social cost of a unit of carbon emissions is the same regardless of who is the emitter, but the benefits vary.  And the benefits will be reflected in the polluters’ willingness to pay for permits.  If we attach a high value to the output of producer A, then producer A should be more willing to pay for the right to produce (and therefore pollute) than producer B, whose output we value less.

But again this depends on the market conditions.  Producer B might be in a competitive market where, at the margin, it internalizes all of the gains from increased output and Producer A might be a monopolist whose marginal revenue is less than price and therefore internalizes only a fraction of the gains.

(The twist is that pollution rights are divisible and so the appropriate calculation is at the margin which flips the comparison between competitive and monopolistic producers.  Real estate is indivisible (or at least much less divisible) and so average calculations take over.)
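The twist can be shown with made-up numbers: because permits are divisible, bids are driven by the marginal value of one more unit of output, and at the margin a monopolist internalizes less than a price taker.

```python
# Invented example: Producer A is a monopolist facing inverse demand
# P = 20 - q with marginal cost 4; Producer B is competitive, selling at
# a market price of 10 with the same marginal cost.

def monopolist_marginal_value(q, a=20.0, b=1.0, mc=4.0):
    """Marginal value of one more permit to the monopolist: MR - mc."""
    marginal_revenue = a - 2 * b * q  # MR lies below the price a - b*q
    return marginal_revenue - mc

def competitive_marginal_value(price=10.0, mc=4.0):
    """A price taker internalizes the full margin p - mc."""
    return price - mc

q = 6.0                 # monopolist's current output
price_A = 20.0 - q      # 14.0: consumers pay more for A's output than B's 10.0
wtp_A = monopolist_marginal_value(q)  # 14 - 2*6 ... = 4.0
wtp_B = competitive_marginal_value()  # 10 - 4 = 6.0

# A's output commands the higher price, yet A's marginal willingness to
# pay for a permit is lower, because marginal revenue sits below price.
print(price_A, wtp_A, wtp_B)
```

So in a permit auction the competitive producer outbids the monopolist at the margin even when consumers value the monopolist's output more per unit.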

This points to an advantage of a carbon tax over a market-based permit system.  A carbon tax can be customized by industry and market conditions.  A permit market treats all polluters equally.

I teach undergraduate intermediate microeconomics, a 10-week course that is the second in a two-part sequence at Northwestern University.  I have developed a unique approach to intermediate micro based originally on a course designed by my former colleague Kim-Sau Chung.  The goal is to study the main themes of microeconomics from an institution-free, and in particular market-free, approach.  To illustrate what I mean, when I cover public goods, I do not start by showing the inefficiency of market-provided public goods.  Instead I ask what are the possibilities and limitations of any institution for providing public goods.  By doing this I illustrate the basic difficulty without confounding it with the additional problems that come from market provision.  I do similar things with externalities, informational asymmetries, and monopoly.

All of this is done using the tools of dominant-strategy mechanism design.  This enables me to talk about basic economic problems in their purest form.  Once we see the problems posed by the environments mentioned above, we investigate efficiency  in the problem of allocating private goods with no externalities.  A cornerstone of the course is a dominant-strategy version of the Myerson-Satterthwaite theorem which shows the basic friction that any institution must overcome.  We then investigate mechanisms for efficient allocation in large economies and we see that the institutions that achieve this begin to resemble markets.

Only at this stage do markets become the primary lens through which to study microeconomics.  We look at a simple model of competition among profit-maximizing auctioneers and a sketch of convergence to competitive equilibrium.  Then we finish with a brief look at general equilibrium in pure exchange economies and the welfare theorems.

There is a minimal amount of game theory, mostly just developing the tools necessary to use mechanism design in dominant strategies, but also a side trip into Nash equilibrium and mixed strategies.

In the coming weeks I will be posting here my lecture notes with a brief introduction to the themes of each.  I am distributing these notes under the Creative Commons attribution, non-commercial, share-alike license.  Briefly, you are free to use these for any non-commercial purpose but you must give credit where credit is due.  And you are free to make any changes you wish, but you must make available your modifications under the same license.

Today I am posting my notes for the first week, on welfare economics.

I begin with welfare economics because I think it is important to address at the very beginning what standard we should be using to evaluate economic institutions.  And students learn a lot from just being confronted with the formal question of what is a sensible welfare standard.  Naturally these lectures build to Arrow’s theorem, first discussing the axioms and motivating them and then stating the impossibility result.  In previous years I would present a proof of Arrow’s theorem but recently I have stopped doing that because it is time consuming and bogs the course down at an early stage.  This is one of the casualties of the quarter system.

Storn White, lifestyle artist.

Like most San Franciscans, Charles Pitts is wired. Mr. Pitts, who is 37 years old, has accounts on Facebook, MySpace and Twitter. He runs an Internet forum on Yahoo, reads news online and keeps in touch with friends via email. The tough part is managing this digital lifestyle from his residence under a highway bridge.

The article is here. Another highlight:

Michael Ross creates his own electricity, with a gas generator perched outside his yellow-and-blue tent. For a year, Mr. Ross has stood guard at a parking lot for construction equipment, under a deal with the owner. Mr. Ross figures he has been homeless for about 15 years, surviving on his Army pension.

Inside the tent, the taciturn 50-year-old has an HP laptop with a 17-inch screen and 320 gigabytes of data storage, as well as four extra hard drives that can hold another 1,000 gigabytes, the equivalent of 200 DVDs. Mr. Ross loves movies. He rents some from Netflix and Blockbuster online and downloads others over an Ethernet connection at the San Francisco public library.

Greg Mankiw is trying to make a reductio ad absurdum critique of the objective of income redistribution.  He has written a paper with Matthew Weinzierl which shows that optimal taxation will typically involve taxing all kinds of characteristics that seem patently unfair and unacceptable.  He concludes from this that it is the goal of income redistribution that entails these absurdities.

But there is a prominent guy who lives at a nice home at 1600 Pennsylvania Avenue who wants to “spread the wealth around.” The moral and political philosophy used to justify such income redistribution is most often a form of Utilitarianism. For example, the work on optimal tax theory by Emmanuel Saez, the most recent winner of the John Bates Clark award, is essentially Utilitarian in its approach.

The point of our paper is this: If you are going to take that philosophy seriously, you have to take all of the implications seriously. And one of those implications is the optimality of taxing height and other exogenous personal characteristics correlated with income-producing abilities.

This argument fails because the objectionable policies implied by optimal taxation in his model have nothing to do with income redistribution or utilitarianism.  Indeed they would be optimal under the weaker and unassailable welfare standard of Pareto efficiency which I would assume Mankiw embraces.

Let me summarize.  Optimal taxation involves minimizing the distortionary effect on output from raising some required level of revenue.  It does not matter what that revenue is being used for.  It could be for redistribution but it could also be for producing public goods that will benefit everyone.  Whatever revenue is required, the optimal taxation policy generates this revenue with minimal cost in terms of reduced incentives for private production.    Taxing exogenous and observable characteristics that are correlated with productivity is a way of generating revenue without distorting incentives.

If we tax income (a direct measure of productivity), you can lower your taxes by earning less; that is a distortion.  If we tax your height (known to be correlated with productivity), you cannot avoid these taxes by making yourself shorter.
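The distortion is easy to see in a minimal sketch (my own toy quasi-linear model, not anything from the Mankiw-Weinzierl paper): utility is after-tax earnings minus an effort cost of l²/2 at wage w, so a worker facing an income tax at rate t supplies l = w(1 - t), while a lump-sum height tax leaves effort untouched.

```python
# Toy labor-supply model: the worker maximizes w*(1 - t)*l - l**2 / 2,
# which gives l = w*(1 - t).  A height tax is lump-sum, so it acts like
# t = 0 on the effort margin.

def labor_supply(w, income_tax_rate):
    # first-order condition: w*(1 - t) - l = 0
    return w * (1 - income_tax_rate)

w, t = 10.0, 0.3
l_income_tax = labor_supply(w, t)    # 7.0: effort falls under the income tax
l_height_tax = labor_supply(w, 0.0)  # 10.0: lump-sum tax, no wedge

revenue_income = t * w * l_income_tax  # 21.0 raised, at the cost of lost output
revenue_height = 21.0                  # the same 21.0 collected as a fixed levy

print(l_income_tax, l_height_tax)
```

Both schemes raise 21.0 here, but the income tax shrinks effort from 10 to 7 while the height tax costs nothing in output, which is exactly why optimal tax theory reaches for exogenous characteristics.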

So the implication that Mankiw wants us to be uncomfortable with is an implication of the way optimal tax theorists conceive the problem of revenue generation and the implication would be present regardless of how we imagine that tax revenue being spent. It has nothing to do with redistribution and we can feel uncomfortable with height taxation without that making us think twice about our desire to redistribute wealth.

Here is an interesting article about the history of the Ivy League and the member universities’ attitudes toward sport.

The Ivy is never going to be the Southeastern Conference—and nobody is suggesting it should be. The schools don’t need the exposure of sports to attract students and alumni donations. But some of the league’s alumni complain that the schools offer their students the best of everything, except in this one area. “Why not give them the same opportunities and the same platform in athletics that you do in academics?” says Marcellus Wiley, a former NFL defensive end who played at Columbia in the 1990s. “I think they should revisit everything.”

If we take the objective to be maintaining reputation and attracting donations then there is a broader question.  Why is the concentration among schools which compete on academic excellence so much higher than among those that compete on athletics?  Competition for dominance in sport appears to be more costly and occurs at a higher frequency than the competition for academic excellence.  Some possible reasons:

  1. There is more variance in academic talent than in talent in sports.  Thus the top end is thinner and the market is smaller.
  2. There is more continuity in academic strength purely because of numbers.  A bad recruiting class for the basketball team a few years in a row and you are back to square one.  A freshman class at Harvard is large enough that variations wash out.
  3. It is easier to throw money at sport.  One coach makes the whole program.  Assessing the talent of faculty and attracting it with money is more complicated.  And maybe irrelevant.

I would like to believe 1 but I don’t.  I would like not to believe 3 but it’s hard.  I do believe 2.

A post at Language Log explores the use of mathematics in linguistics.  It closes with

Anyhow, my conclusion is that anyone interested in the rational investigation of language ought to learn at least a certain minimum amount of mathematics.

Unfortunately, the current mathematical curriculum (at least in American colleges and universities) is not very helpful in accomplishing this — and in this respect everyone else is just as badly served as linguists are — because it mostly teaches things that people don’t really need to know, like calculus, while leaving out almost all of the things that they will really be able to use. (In this respect, the role of college calculus seems to me rather like the role of Latin and Greek in 19th-century education:  it’s almost entirely useless to most of the students who are forced to learn it, and its main function is as a social and intellectual gatekeeper, passing through just those students who are willing and able to learn to perform a prescribed set of complex and meaningless rituals.)

Before getting into economics and after getting out of physics, I took calculus and found it very useful and interesting for its own sake.  I do see that the way calculus is taught in the US is geared toward engineers and physicists, but I have a hard time thinking of what mathematics would substitute for calculus in the undergraduate curriculum if the goal was to teach students something useful.  It can’t be analysis or topology.  I took abstract algebra as an undergraduate and found it esoteric and boring.  Discrete mathematics?  OK, maybe statistics, but don’t you need integration for that?  Help me out here: if you had the choice, what would you replace calculus with?  And remember, the goal is to teach something useful.

Via kottke.org, here is the first installment of an Errol Morris essay on Han van Meegeren, the Dutch artist who duped the art world into thinking that his paintings were the work of Vermeer.  Morris concludes with the following

To be sure, the Van Meegeren story raises many, many questions. Among them: what makes a work of art great? Is it the signature of (or attribution to) an acknowledged master? Is it just a name? Or is it a name implying a provenance? With a photograph we may be interested in the photographer but also in what the photograph is of. With a painting this is often turned around, we may be interested in what the painting is of, but we are primarily interested in the question: who made it? Who held a brush to canvas and painted it? Whether it is the work of an acclaimed master like Vermeer or a duplicitous forger like Van Meegeren — we want to know more.

The economics version of this question is why the price of a painting would fall just because it has been discovered to be a forgery by technical means and not because the painting was considered of lesser quality.  And a corollary question: if you own a painting which is thought by all to be a genuine Vermeer, why would you or anyone invest to find out whether it was a forgery?  There is probably a good answer to this that doesn’t require resorting to the assumption that buyers value the name more than the painting.

The value of a painting is the flow value of having it hang on your wall plus the eventual resale value.  For the truly immortal works of art the flow value is negligible relative to the resale value.  The resale value is linked to the flow value to the person to whom it will be sold, the person she will sell it to, etc.  Ultimately this means that the price is determined by the sequence of people who have the greatest appreciation for art, since they will be willing to pay the most for the flow value.  The existence of just one person in that sequence who is sensitive enough to distinguish a true Vermeer from a Van Meegeren implies a large difference in the prices, even if that person is not alive today and will not be for many generations.
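The recursion can be computed directly with invented numbers: the price today is the flow value plus the discounted resale price, iterated backward over a chain of future owners.  Suppose only one owner, ten generations from now, can tell a true Vermeer from a Van Meegeren, and everyone else enjoys identical flow value.

```python
# Price recursion: price_t = flow_t + discount * price_{t+1}, solved by
# backward induction over the chain of future owners.  All numbers are
# illustrative.

def price_today(flows, discount=0.9, terminal=0.0):
    price = terminal
    for flow in reversed(flows):
        price = flow + discount * price
    return price

horizon = 30
flows_genuine = [1.0] * horizon
flows_forgery = [1.0] * horizon
flows_genuine[10] = 100.0  # the lone future connoisseur's extra appreciation

gap = price_today(flows_genuine) - price_today(flows_forgery)
print(round(gap, 2))  # a large price gap today, for walls that look identical
```

The gap today is the connoisseur's extra flow value discounted back ten periods (99 × 0.9¹⁰ ≈ 34.5), so one discriminating buyer far in the future is enough to drive a wedge in prices now.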

“These are relatively simple physical equations, so you program them into the computer and therefore kind of let the computer animate things for you, using those physics,” said May. “So in every frame of the animation, (the computer can) literally compute the forces acting on those balloons, (so) that they’re buoyant, that their strings are attached, that wind is blowing through them. And based on those forces, we can compute how the balloon should move.”

This process is known as procedural animation, and is described by an algorithm or set of equations, and is in stark contrast to what is known as key frame animation, in which the animators explicitly define the movement of an object or objects in every frame.
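A minimal procedural-animation sketch in the spirit May describes (my own toy physics, not Pixar's) computes the forces on a balloon each frame — buoyancy up, a steady wind, drag, and a spring standing in for the string — and integrates the motion step by step.

```python
# Procedural animation of one balloon: per frame, sum the forces and
# take a semi-implicit Euler step (update velocity, then position).

def simulate_balloon(frames=200, dt=1 / 24, mass=0.1):
    x, y = 0.0, 0.0      # position
    vx, vy = 0.0, 0.0    # velocity
    anchor_y = 0.0       # where the string is tied, string length ~ 1
    for _ in range(frames):
        buoyancy = 2.0                         # net lift minus weight
        wind = 0.5                             # steady horizontal gust
        string = -4.0 * (y - anchor_y - 1.0)   # spring force from the string
        drag_x, drag_y = -0.8 * vx, -0.8 * vy  # air resistance
        ax = (wind + drag_x) / mass
        ay = (buoyancy + string + drag_y) / mass
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x, y

x, y = simulate_balloon()
print(x, y)  # drifts downwind while hovering where buoyancy balances the string
```

No keyframes anywhere: the motion falls out of the force equations, which is the contrast with key-frame animation the quote is drawing.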

Why stop there?  Next, we can use models from the behavioral sciences, program a few equations and let the characters, dialog, and action animate themselves by following the solution of the model.  Don’t believe me? Here’s how to procedurally animate Romeo and Juliet.

Tom Schelling has a famous example illustrating how to solve coordination problems.  Suppose you are supposed to meet someone in New York City but you forgot to specify a location to meet.  This was before the era of cell phones so there is no opportunity for cooperation before you pick a place to go.  Where do you go?  You go where your friend thinks you are most likely to go, which is of course where she thinks you think she thinks you are most likely to go, etc.

Notice that convenience or taste or proximity have no direct bearing on your choice.  These considerations may indirectly influence your choice, but only if she thinks you think she thinks … that they will influence your choice.

There was an old game show called the Newlywed Game where I received some of my very early training as a game theorist, in my living room, roughly at the age of 7.  Here is how the show worked.  Four pairs of newlyweds were competing.  The husbands, say, would be on stage first, with the wives in an isolated room.  The husbands would be asked a series of questions about their wives, say “What wedding gift from your family does your wife hate the most?” and the husbands would have to guess what the wives would say.  (This was the 70’s so every episode had at least one question about “making whoopee,” like “what movie star would your wife say you best remind her of when you’re makin’ whoopee?”)

When you watch this show every night for as long as I did, you soon figure out that the way to win is to disregard the question completely and just find something to say that your wife is likely to say, which is of course what she thinks you think she is likely to say, etc.  You could try to make a plan with your newlywed spouse beforehand about what to say, something like the first answer is “the crock pot,” the second answer is “Burt Reynolds,” etc.  But this looks awkward when the first question turns out to be “What is your wife’s favorite room to make whoopee?”

So the problem is just like Schelling’s meeting problem.  The truth is of secondary importance.  You want to find the most obvious answer, i.e. the one your wife is most likely to give because she thinks you are most likely to give it, etc.    For example, if the question is, “Which Smurf will your wife say best describes your style of makin’ whoopee?” then even though you think the answer is probably “Clumsy Smurf” or “Sloppy Smurf”, you say “Hefty Smurf” because that is the obvious answer.
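One way to formalize the "she thinks you think she thinks…" regress is a level-k sketch (my own toy formalization, not from Schelling): level-0 players answer on first instinct; a level-k player names whatever a level-(k-1) player is most likely to name.  Iterating collapses everyone onto the single most obvious answer, and the truth drops out after one step.

```python
from collections import Counter

# Level-k focal-point sketch: start from a distribution of first
# instincts, then let each level best-respond by naming the mode of the
# level below it.

def focal_answer(base_instincts, levels=5):
    dist = Counter(base_instincts)
    answer = None
    for _ in range(levels):
        answer = dist.most_common(1)[0][0]  # best reply: name the mode
        dist = Counter({answer: 1})         # next level expects exactly that
    return answer

# Invented instincts: most spouses' first thought, whatever the truth is.
instincts = (["Hefty Smurf"] * 5 + ["Clumsy Smurf"] * 3 +
             ["Sloppy Smurf"] * 2)
print(focal_answer(instincts))
```

After one round of best replies the distribution is degenerate at the modal answer, so every higher level names it too: "Hefty Smurf" wins even if "Clumsy Smurf" is the honest answer.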


Ok, all of this is setup to tell you that Gary Becker is clearly a better game theorist than Steve Levitt.  Via Freakonomics, Levitt tells the story of a Chicago economics faculty Newlywed game played at their annual skit party.  (Northwestern is one of the few top departments that doesn’t have one of these.  That sucks.)  Becker and Levitt were newlyweds.  According to Levitt they did poorly, but it looks like Becker was onto the right strategy, but Levitt was trying to figure out the right answers:

The first question was, “Who is Gary’s favorite economist?” I thought I knew this one for sure. I guessed Milton Friedman. Gary answered Adam Smith. (Although he later apologized to me and said Friedman was the right answer.)

Then they asked, “In Gary’s opinion, how many more quarters will the current recession last?” I guessed he would say three more quarters, but his actual answer was two more quarters.

The next question was, “Who does Gary think will win the next Nobel prize in economics?” This is a hard one, because there are so many reasonable guesses. I figured if Becker writes a blog with Posner, he might think Posner would win the Nobel prize, so that was my answer. Gary said Gene Fama instead.

The last question we got wrong was one that was posed to Gary, asking which of the following three people I would most like to have lunch with: Marilyn Monroe, Napoleon, or Karl Marx. I know Gary has a major crush on Marilyn Monroe, so that was the answer I gave, even though the question was about who I would want to have lunch with, not who Gary would want to have lunch with. Gary answered Karl Marx (which makes me wonder what he thinks of me), but did volunteer, as I strongly suspected, that he himself would of course prefer Marilyn to either of the other two.

But close:

You take all of the conflict, all of the chaos, all of the noise, and out of that comes this precise mathematical distribution of the way attacks are ordered in this conflict. This blew our mind. Why should a conflict like Iraq have this as its fundamental signature? Why should there be order in war? We didn’t really understand that. We thought maybe there is something special about Iraq. So we looked at a few more conflicts. We looked at Colombia, we looked at Afghanistan, and we looked at Senegal.

See the TED talk. (hat tip:  The Browser)

The French Open began on Sunday and if you are an avid fan like me the first thing you noticed is that the Tennis Channel has taken a deeper cut of the exclusive cable television broadcast in the United States.  I don’t subscribe to the Tennis Channel and until this year they have been only a slight nuisance, taking a few hours here and there and the doubles finals.  But as I look over the TV schedule for the next two weeks I see signs of a sea change.

First of all, only the TC had the French Open on Memorial Day, yesterday.  This I think was true last year as well, but this year all of the early-session live coverage for the entire tournament is exclusive to TC.  ESPN2 takes over for the afternoon session and will broadcast early-session games on tape.

This got me thinking about the economics of broadcasting rights.  I poked around and discovered in fact that the TC owns all US cable broadcasting rights for the French Open many years to come.  ESPN2 is subleasing those rights from TC for the segments they are airing.  So that is interesting.  Why is TC outbidding ESPN2 for the rights and then selling most of them back?

Two forces are at work here.  First, ESPN2 as a general sports broadcaster has more valuable alternative uses for the air time and so their opportunity cost of airing the French Open is higher.  But of course the other side is that ESPN2 can generate a larger audience just from spillovers and self-advertising than TC so their value for rights to the French Open is higher. One of these effects outweighs the other and so on net the French Open is more valuable to one of these two networks.  Naively we should think that whoever that is would outbid the other and air the tournament.  So what explains this hybrid arrangement?

My answer is that there is uncertainty about the TC’s ability to generate enough audience for a grand slam to make it more valuable for TC than for ESPN.  In the face of this, TC wants a deal which allows it to experiment on a small scale and find out what it can do, but also leaves it the option of selling back the rights if the news is bad.  TC can manufacture such a deal by buying the exclusive rights.  ESPN2 knows its net value for the French Open and will bid that value for the original rights.  And if it loses the bidding it will always be willing to buy those rights at the same price on the secondary market from TC.  TC will outbid ESPN2 because the value of the option is at least the resale price, and in fact strictly higher if there is a chance that the news is good.
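The option argument can be run with invented numbers: ESPN2 knows its value for the rights, TC's own broadcast value is uncertain, and owning the exclusive rights lets TC keep the coverage when the news is good and resell at ESPN2's price when it is bad.

```python
# Hypothetical figures: ESPN2's known net value for the rights, and two
# equally likely realizations of TC's own broadcast value once it has
# experimented with the coverage.

v_espn = 100.0                              # ESPN2's bid = its known value
tc_outcomes = [(0.5, 140.0), (0.5, 60.0)]   # (probability, TC's realized value)

# TC's willingness to pay for exclusive rights: in every state it keeps
# the better of broadcasting itself or reselling to ESPN2 at v_espn.
tc_option_value = sum(p * max(v, v_espn) for p, v in tc_outcomes)

print(tc_option_value)  # exceeds v_espn, so TC outbids ESPN2 for the rights
```

Here TC's option is worth 0.5 × 140 + 0.5 × 100 = 120 > 100, strictly above ESPN2's bid precisely because there is some chance the news is good; with no chance of good news the two bids would tie.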

So, the fact that TC has steadily reduced the amount of time it is selling back to ESPN2 suggests that so far the news is looking good and there is a good chance that soon the TC will be the exclusive cable broadcaster for the French Open and maybe even other grand slams.

Bad news for me because in my area the TC is not broadcast in HD and so it is simply not worth the extra cost to subscribe.  While we are on the subject, here is my French Open outlook:

  1. Federer beat Nadal convincingly in Madrid last week.  I expect them in the final and this could bode well for Federer.
  2. If there is anybody who will spoil that outcome it will be Verdasco who I believe is in Nadal’s half of the draw.  The best match of the tournament will be Nadal-Verdasco if they meet.
  3. The Frenchmen are all fun but they don’t seem to have the staying power.  Andy Murray lost a lot psychologically when he was crowing going into this year’s Australian and lost early.
  4. I always root for Tipsarevich.  And against Roddick.
  5. All of the excitement on the women’s side from the past few years seems to have completely disappeared with the retirement of Henin, the injury to Sharapova and the meltdown of the Serbs.  I expect a Williams-Williams yawner.

Turns out that a good way to predict how the US Supreme Court will rule is by counting the number of questions the justices ask each side.  The winning side will be the one asked the fewest questions.  Is this because

  1. the justices have made up their minds already and ask more questions of the losing side, or
  2. more questions put the lawyer on the defensive, weakening his position?

That is, does the outcome cause the questions or the other way around?  I think it has to be the former; indeed, the latter eventually implies the former.  If questioning per se made a side weaker, then the justices would learn this and would realize that their questions were generating more heat than light.  Once they realize this, they will know that the only way to get their side to win would be to ask more questions of the other side.

Google determines quality scores by calculating multiple factors, including the relevance of the ad to the specific keyword or keywords, the quality of the landing page the ad is linked to, and, above all, the percentage of times users actually click on a given ad when it appears on a results page. (Other factors, Google won’t even discuss.) There’s also a penalty invoked when the ad quality is too low—in such cases, the company slaps a minimum bid on the advertiser. Google explains that this practice—reviled by many companies affected by it—protects users from being exposed to irrelevant or annoying ads that would sour people on sponsored links in general. Several lawsuits have been filed by would-be advertisers who claim that they are victims of an arbitrary process by a quasi monopoly.

What is the distortion?  One example would be an advertiser who is targeting a very select segment of the market and expects few to click through but expects a lot of money from those that do.  This advertiser is willing to pay a lot but may be excluded on quality score.  So one view is that Google is transferring value from high-paying niche consumers to the rest of the market.

However, for every set of keywords there is another market.  Google would optimally lower the quality penalty on searches using keywords that reveal that the searcher is really looking for a niche product. With this view the quality score is a mechanism for preventing unraveling of an efficient market segmentation.
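Google’s actual formula is proprietary, but the mechanism the article describes is commonly stylized as ranking ads by bid weighted by quality score, with a penalty minimum bid for low-quality ads.  A toy version, with all numbers invented, shows how a high-paying niche advertiser can be squeezed out:

```python
# Stylized ad auction with quality scores.  This is only a toy model of
# the mechanism described in the article; Google's real formula is not
# public.  All numbers are invented.

MIN_QUALITY = 3        # below this quality, a penalty minimum bid applies
PENALTY_MIN_BID = 5.0  # low-quality ads are excluded unless bid meets this floor

def ad_rank(bid, quality):
    """Rank score: bid weighted by quality (on an assumed 1-10 scale)."""
    if quality < MIN_QUALITY and bid < PENALTY_MIN_BID:
        return 0.0  # effectively excluded by the quality penalty
    return bid * quality

ads = [("niche, high bid", 4.00, 2),   # pays a lot per click, rarely clicked
       ("mass market",     0.50, 9),   # low bid, high click-through
       ("middling",        1.00, 5)]

ranked = sorted(ads, key=lambda a: ad_rank(a[1], a[2]), reverse=True)
for name, bid, q in ranked:
    print(f"{name}: rank score {ad_rank(bid, q):.2f}")
```

The niche advertiser bids eight times the mass-market ad but ends up last: its low quality score triggers the penalty floor, which is the transfer away from niche consumers described above.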

The article is in Wired and it looks at Hal Varian, chief economist at Google.

Here is an article (via MindHacks) profiling the types of people who are attracted to conspiracy theories.

It is the domain of psychology to study the specific conspiracy theories that appear and the people who advocate them, but to a game theorist the prevalence of conspiracy theories is not surprising.  They fill a credibility gap.  Like nature, the truth abhors a vacuum.  It cannot be an equilibrium that only the truth is told and retold.  Because then we would learn to believe everything we hear.  That would be exploited by people trying to take advantage.

Conspiracy theories are just one example of noise that must be present in equilibrium to ensure that we don’t believe everything we hear.  And arguably conspiracy theories about events that have already happened or are beyond our control are the cost-minimizing way of moderating credibility.  Nobody really gets harmed.

Psyblog has a rundown of 18 failures of the brain’s system of attention.  My favorite:

9. Ironic processes of control

In fact sometimes attention is a real bear. What about when you really want to get something right, like putt the ball, hit a beautiful serve right in the corner or reverse the car into a narrow space? Naturally you concentrate even harder than normal, really focus. Unfortunately that just seems to make things worse: you miss the putt by a mile, frame the ball 50ft in the air and ding the car. What gives? These are what Wegner et al. (1998) call ‘ironic processes of control’. Sometimes too much attention is just as detrimental as too little.

I normally strive to pay as little attention as possible.

I don’t think this is the right way to make it.

Linguine and clams is one of those dishes where insistence on simplicity is rewarded.  The basic mechanics of the dish are extremely straightforward.  There are just a few little secrets that elevate a good dish to an excellent one.

Obviously you want fresh clams.  You don’t want fresh pasta.  Dried pasta works much better here, more on that later.  You want garlic, parsley is a nice touch, olive oil, butter and white wine.  That’s it. (Bacon?? Sorry Sandeep, no.) Maybe red chile flakes.

Start boiling the pasta.  Set the timer for 1 minute short of al dente.  Heat a flat pan which can be covered.  When hot, add a touch of olive oil and chopped garlic (if you like the red chile flakes, here is where you add them).  When the garlic is soft but nowhere near brown, place the clams in the pan and cover.  After about a minute splash in some white wine and cover again.  After another minute you should have open clams and a lot of clam juice in the pan.  Remove the clams to a bowl and reduce the heat on the pan.

Look in the pan.  This is your sauce.  You will soon be placing pasta in this sauce and you want the pasta to be covered but not swimming, so you are adjusting the heat so that it reduces to that target when your timer goes off.  Just before it does, swirl in butter to enrich the sauce.

When the timer goes off you have pasta which is not quite cooked.  That is what you want.  Use tongs and directly lift the pasta out of the boiling water and into the sauce.  There is a reason for this.  The pasta water has starch in it from the pasta and some of this water will come along with the pasta into the sauce.  The starch tastes good and it will help give body to the sauce.  Also, the water will increase the volume of the sauce and you want this because you are going to cook the pasta for its remaining minute in the sauce.  Pasta is a sponge at this point of the cooking process (think of what your pasta looks like when you have leftovers:  swollen and soft.  It has soaked up everything around it.)  The pasta will soak up the juices in its last minute of cooking.  Fresh pasta will not do this.

When this is done, throw in your parsley, toss and plate the pasta.  Arrange the clams on top and serve.  Provide bread and fork.  Beer is best.

Here is a nice article (via The Browser) theorizing about why Wikipedia works.  The apparent puzzles are why people contribute to this public good and why it withstands vandalism and subversion.  The first puzzle is no longer a puzzle at all; even we economists now accept that people freely give away their information and effort all the time.  But no doubt others have just as much motivation, or more, to vandalize and distort, hence the second puzzle.

The article focuses mostly on the incentives to police, which is the main reason articles on, say, Barack Obama probably remain neutral and factual most of the time.  But Wikipedia would not be important if it were just about focal topics that we already have dozens of other sources on.  The real reason Wikipedia is as valuable a resource as it is stems from the 99.999% of articles that are on obscure topics that only Wikipedia covers.

For example, Barbicide.

These articles don’t get enough eyeballs for policing to work, so how does Wikipedia solve puzzle number two in these cases?  The answer is simple:  a vandal has to know that, say, John I of Trebizond exists to know that there is a page about him on Wikipedia that is waiting there to be vandalized.  (I just vandalized it, can you see where?)

There are only two classes of people who know that there exists a John I of Trebizond (up until this moment that is.)  Namely, people who know something useful about him and people who want to know something useful about him.  So puzzle number 2 is elegantly sidestepped by epistemological constraints.

Modern classical music, especially, is really hard.  What the hell are you listening to, this endlessly winding dissonant stuff without much melody?

The only way to get this kind of music in your ear is to listen to it over and over, which is what I have been doing with the first movement, “Prelude,” of the Maw Violin Concerto the last couple of days.  It doesn’t matter whether I want to hear it again or not:  I just play it again when it’s done.

When I cycle a piece of thorny orchestral music this way the fog slowly lifts, the picture clears, figure and ground separate.  Past pieces I’ve placed on endless loop have included Ligeti’s Melodien, Birtwistle’s The Triumph of Time, Lieberson’s Piano Concerto, and Schuller’s Of Reminiscences and Reflections. Initially they were all daunting listens but now they are old friends.  In every case I have learned to understand the composer’s acerbic language much better, so that new experiences with their other pieces aren’t as hard.

That’s Ethan Iverson who, in addition to being the piano player for the frontier jazz trio The Bad Plus, writes an outstanding blog, Do The Math.  This post clarified a lot for me.  Iverson is a broad, open-minded, and gifted musician and even he approaches contemporary classical music the same way my PhD students approach the Revenue Equivalence Theorem.  I have tried exactly what he describes here for Ligeti, etc. and I have yet to turn that corner.

OK so I am apparently obsessed with this theme, but I guess that is what makes me a blogger.

Research, like a lot of collaborative activities, encourages specialization.  Successful co-authorships often combine people with differentiated skills.  So successful co-authors are complementary, which means that your co-author’s other co-authors are substitutes for you.  This should imply that you are less likely, other things equal, to have a successful co-authorship with your co-author’s co-authors than with, say, a randomly selected collaborator.

If we tried to look for evidence of this in data the difficulty would be in holding other things equal.  You are more likely to talk to and have other things in common with your co-author’s co-author than with a random researcher so this would have to be controlled for.

These issues make me think there is some really interesting research waiting to be done taking data from social networks, like patterns of co-authorship or friendship relations on Facebook, and trying to simultaneously identify (in the formal sense of that word) “types” (e.g. technician vs idea-man) and preferences (e.g. whether these types are complements or substitutes).  The really interesting part of this must be the econometric theory saying what the limits are of what can and cannot be identified.

In chess, one way a game can be declared a draw is if black, say, has no legal move.  This is called stalemate.  Typically stalemate occurs because white has a material advantage but fails to checkmate and instead leaves the black king with no space to move that does not walk into check.  It is illegal to place your own king in check.

The reason stalemate is an artificial rule is that check is an artificial rule.  Clearly the object in chess is to conquer the opponent’s king. One can imagine that check evolved as a way to prevent dishonorable defeat when you overlook a threat against your king and allow it to be captured even though it could have escaped.  To prevent this, if your king is in check the rules of chess require that you escape from check on the next move and it is illegal to move into check.  This rule means that the only way to win is to checkmate:  place your opponent in a position where his king is threatened and cannot escape the threat.  The game ends there because on the very next move the king will certainly be captured.

This gives rise to stalemate:  it is only because of check that a player can have no legal move.  If we dispensed with checkmate, replacing it with the more transparent and natural objective of capturing the king, and eliminated the rule that you cannot end your turn in check, then a player would always have a legal move.  (It is easy to prove this.)  Thus, no stalemate.

You can learn a lot about who loves you by walking around with food on your face.

Should you tell someone when they have food on their face?  You will embarrass them but you will spare them embarrassment later.  The embarrassment comes from common knowledge. He knows that you know, etc. that he had food on his face.  You would escape this if you could alert him about the food without him knowing it was you.

You could wait and expect that the food will fall.  But you run the risk that it won’t and he’ll discover the food and realize that you let him walk around with food on his face.  And once you wait for a bit you are committed.  You can’t very well tell him after the meal is over.  “You mean you sat there talking to me the whole time with sauce on my chin?”

And what happens when you are in a group and one guy has food on his face?  Who’s going to tell?  Whoever is the first to talk proves that everyone else was willing to ignore it.

Bottom line:  if you are dining with me and I have food on my face, send me a text message.

Sandeep has previously blogged about the problems with torture as a mechanism for extracting information from the unwilling. As with any incentive mechanism, torture works by promising a reward in exchange for information.  In the case of torture, the “reward” is no more torture.

Sandeep focused on one problem with this.  This works only if the torturer will actually carry out his promise to stop torturing once the information is given.  But once the information is given the torturer now knows he has a real terrorist and in fact a terrorist with valuable information.  This will lead to more torture (for more information) not less.  Unless the torturers have some way to tie their hands and stop torturing after a few tidbits of information, the captive soon figures out that there is no incentive to talk and stops talking. A well-trained terrorist knows this from the beginning and never talks.

Let me point out yet another problem with torture.  This one cannot be solved even by enabling the torturers to commit to an incentive scheme.

The very nature of an incentive scheme is that it treats different people differently.  To be effective, torture has to treat the innocent differently from the guilty.  But not in the way you might guess.

Before we commence torturing we don’t know in advance what information the captive has, and indeed we don’t know for sure that he is a terrorist at all, even though we might be pretty confident.    A captive who really has no information at all is not going to talk.   Or if he does he is not going to give any valuable information, no matter how much he would like to squeal and stop the torture.

And of course the true terrorist knows that we don’t know for sure that he is a terrorist.  He would like to pretend that he has no information in hopes that we will conclude he is innocent and stop torturing him.  Therefore the torture must ensure that the captive, if he is indeed an informed terrorist, won’t get away with this.  With torture as the incentive mechanism, the only way to do this is to commit to torture for an unbearably long time if the captive doesn’t talk.

And this leads us to the problem.  In the face of this, the truly informed terrorist begins talking right away in order to avoid the torture.  The truly innocent captive cannot do that no matter how much he would like to.  And so torture, if it is effective at all, necessarily inflicts unbearable suffering on the innocent and very little suffering on the actual terrorists.
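The separation result can be sketched with a toy payoff calculation.  All the parameters here are invented purely to illustrate the logic of the committed scheme:

```python
# Toy model of the argument above.  The torturer commits to T periods of
# torture unless valuable information is revealed; talking stops it at once.
# All parameter values are invented for illustration.

T = 100                 # committed duration, chosen to be "unbearable"
COST_PER_PERIOD = 1.0   # suffering per period of torture
VALUE_OF_SECRET = 10.0  # what the informed terrorist loses by revealing

def suffering(informed, talks_immediately):
    """Total loss endured by the captive under the committed scheme."""
    if informed and talks_immediately:
        return VALUE_OF_SECRET       # gives up the secret, torture stops at once
    return T * COST_PER_PERIOD       # holds out (or has nothing to reveal)

# The informed terrorist's best response: talk iff talking is cheaper.
terrorist_talks = suffering(True, True) < suffering(True, False)
print("informed terrorist talks immediately:", terrorist_talks)
print("terrorist's total loss:", suffering(True, terrorist_talks))
print("innocent's total loss: ", suffering(False, False))
```

In this sketch the informed terrorist talks and loses 10, while the innocent captive, with nothing to reveal, absorbs the full 100: exactly the inversion described in the paragraph above.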

  1. Next there will be scam-baiter-baiters, etc.
  2. Psychological time travel.  Must have something to do with this.
  3. Jazz and brain chemistry.

Let’s try a little (thought) experiment in verbal short-term memory. First, find a friend. Then, find a reasonably complex sentence about 45 words long …Now call your friend up on the phone, and have a discussion about the topic of the article. In the course of this conversation, slip in a verbatim performance of the selected sentence. Then ask your friend to write an essay on the topic of the discussion. … How likely is it that the selected sentence will find its way, word for word, into your friend’s essay?

In case you haven’t guessed, the question is rhetorical and the article (from LanguageLog, a great blog) is referring to Maureen Dowd’s plagiarism.  It is a fallacy, though, to focus only on the probability of the scenario you are trying to reject.  What matters is the probability of that scenario relative to the alternative scenario, namely that Maureen Dowd would bother (intentionally) lifting word for word a paragraph which is not particularly insightful or cleverly written from a popular blog at the risk of being called a plagiarizer.

When something happens that has two very unlikely explanations, picking one of those explanations is mostly driven by your priors.
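Bayes’ rule in odds form makes the point precise: posterior odds equal prior odds times the likelihood ratio.  A small sketch, with all probabilities invented for illustration:

```python
# Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio.
# H1: Dowd reproduced the passage unconsciously from memory.
# H2: she lifted it deliberately.
# All probabilities below are invented purely for illustration.

def posterior_odds(prior_odds, likelihood_ratio):
    """Odds of H1 versus H2 after observing the evidence."""
    return prior_odds * likelihood_ratio

# A verbatim 45-word match is nearly impossible under H1, near-certain under H2.
lr = 1e-6 / 0.9  # P(match | memory slip) / P(match | deliberate copy)

for prior in (1e2, 1e7):  # prior odds of memory-slip versus deliberate-copy
    odds = posterior_odds(prior, lr)
    verdict = "memory slip" if odds > 1 else "deliberate copy"
    print(f"prior odds {prior:.0e}: posterior odds {odds:.2e} -> {verdict}")
```

With the same evidence, one prior points to deliberate copying and the other to an innocent memory slip, which is exactly the sense in which the conclusion is driven by your priors.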