
Made it to Brooklyn alive. I don’t see what the big deal is, some nice chap shoveled me a spot and even gave me a free chair!

From @TheWordAt.

Speaking of which, have you noticed the similarity between shovel-earned parking dibs and intellectual property law?  In both cases the incentive to create value is in-kind:  you get monopoly power over your creation.  The theory is that you should be rewarded in proportion to the value of the thing you create.  It’s impossible to objectively measure that and compensate you with cash so an elegant second-best solution is to just give it to you.

At least in theory.  But in both IP and parking dibs there is no way to net out the private benefit you would have earned anyway even in the absence of protection.  (Aren’t most people shoveling spaces because otherwise they wouldn’t have any place to put their car in the first instance? Isn’t that already enough incentive?)  And all of the social benefits are squandered anyway due to fighting ex post over property rights.

I wonder how many people who save parking spaces with chairs are also software/music pirates?

Finally, here is a free, open-source Industrial Organization textbook (dcd: marciano.)  This guy did a lot of digging and we all get to recline in his chair.

Jonathan Weinstein does a very good Dickens.  A fun read.

“There are many other purposes of charity, Uncle, but at the risk of my immortal soul, I shall debate you on your own coldhearted terms. Your logic concerning gifts appears infallible, but you have made what my dear old professor of economic philosophy would call an implicit assumption, and a most unwarranted one.”

You can find it here, thanks to reader Elisa for hunting it down. The core is paragraphs 43-112 (starting on page 27), which lay out the new rules. I will give some excerpts and my own commentary.

The regulations break down into 4 categories: transparency, no blocking, no unreasonable discrimination, and reasonable network management. Transparency is what it sounds like: providers are required to maintain and make available data on how they are managing their networks. The blocking and discrimination rules are the most important and the ones I will focus on.

No Blocking.

A person engaged in the provision of fixed broadband Internet access service, insofar as such person is so engaged, shall not block lawful content, applications, services, or non-harmful devices, subject to reasonable network management. (paragraph 63)

This is the clearest statement in the entire document. (Many phrases are qualified by the “reasonable network management” toss-off.  In the abstract that could be a troubling grey area, but it is pretty well clarified in later sections and appears to be mostly benign, although see one exception I discuss below.)  The no-blocking rule is elaborated in various ways:  providers cannot restrict users from connecting compatible devices to the network, degrading particular content or devices is equivalent to blocking and not permitted, and especially noteworthy:

Some concerns have been expressed that broadband providers may seek to charge edge providers simply for delivering traffic to or carrying traffic from the broadband provider’s end-user customers. To the extent that a content, application, or service provider could avoid being blocked only by paying a fee, charging such a fee would not be permissible under these rules. (paragraph 67)

No Unreasonable Discrimination

A person engaged in the provision of fixed broadband Internet access service, insofar as such person is so engaged, shall not unreasonably discriminate in transmitting lawful network traffic over a consumer’s broadband Internet access service. Reasonable network management shall not constitute unreasonable discrimination.

This rule is heavily qualified in the paragraphs that follow.  Here is my framework for reading these.  There are three typical ways a provider would discriminate:  differentially pricing various services (i.e. you pay differently depending on whether you are accessing Facebook or YouTube), differentially pricing by quantity (i.e. the first MB costs more or less than the last), or differentially pricing by bandwidth (i.e. holding quantity fixed, you pay more if you want it sent to you faster, for example when watching HD video).

The rules seem to consider some of these forms of discrimination unreasonable but others reasonable.  The clearest prohibition is against the first form of discrimination, by data type.

For a number of reasons, including those discussed above in Part II.B, a commercial arrangement between a broadband provider and a third party to directly or indirectly favor some traffic over other traffic in the broadband Internet access service connection to a subscriber of the broadband provider (i.e., “pay for priority”) would raise significant cause for concern. (paragraph 76)

Such a ban is clearly dictated by economic efficiency.  The cost of transmitting a datagram is independent of the content it contains and therefore efficient pricing should treat all content equally on a per-datagram basis.  This principle is the hardest to dispute and the FCC has correspondingly taken the clearest stand on it.

As for quantity-based discrimination:

We are, of course, always concerned about anti-consumer or anticompetitive practices, and we remain so here. However, prohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks. The framework we adopt today does not prevent broadband providers from asking subscribers who use the network less to pay less, and subscribers who use the network more to pay more.  (paragraph 72)

So tiered service by quantity is permitted.  Note that the wording given above is off the mark in terms of what efficiency dictates.  It is not quantity per se that should be priced but rather congestion.  A toll-road is a useful metaphor.  From the point of view of efficiency, the purpose of a toll is to convey to drivers the social cost of their use of the road.  When drivers must pay this social cost, they are induced to make the efficient decision whether to use the road by comparing it to their private benefit.

The social cost is zero when traffic is flowing freely (no congestion) because an additional driver doesn’t slow anybody else down. So tolls should be zero during these periods.  Tolls are positive only when the road is utilized at capacity and additional drivers reduce the value of the road to others.

So “lighter users subsidizing heavier users” sounds unfair but it’s really orthogonal to the principles of efficient network management.  In an efficiently priced network the off-peak users are subsidized by the peak users regardless of their total amount of usage.  And this is how it should be, not because of anything having to do with fairness but because of incentives for efficient usage.

There is one big problem with this toll-road metaphor when it comes to the Internet however.  The whole point of peak pricing is to signal to drivers that it’s costly to drive right now.  But when you are downloading content from the Internet, things are happening too fast for you to respond to up-to-the-second changes in congestion.  It is just not practical to have prices adjust in real time to changing network conditions as dictated by peak-load pricing.  And if users cannot respond to congestion prices as they change, calculating prices ex post and sending users the bill at the end of the month would not serve their purpose.

Given this, it could be argued that a reasonable proxy is to charge users by their total usage.  It’s a reasonable approximation that those with greater total usage are also most likely to be imposing greater congestion on others.  And the FCC rules permit this.  (Note that in particular, what is implied by tiered pricing as a proxy for congestion pricing is not a quantity discount but in fact a quantity surcharge. The per-datagram price is larger for heavier users.)
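To make the quantity-surcharge point concrete, here is a minimal sketch (the numbers are mine, nothing like them appears in the FCC order) contrasting a flat per-GB price with an increasing-block tariff whose marginal price rises with total usage.

```python
# Toy numbers (not from the FCC order): a flat per-GB price versus an
# increasing-block tariff in which the marginal price rises with total usage,
# i.e. a quantity surcharge rather than a quantity discount.

def flat_bill(gb, price_per_gb=1.00):
    """Every gigabyte costs the same."""
    return gb * price_per_gb

def surcharge_bill(gb, blocks=((50, 0.50), (150, 1.00), (float("inf"), 2.00))):
    """Hypothetical increasing-block tariff: the first 50 GB at $0.50/GB,
    the next 150 GB at $1.00/GB, and everything beyond that at $2.00/GB."""
    bill, remaining = 0.0, gb
    for block_size, price in blocks:
        used = min(remaining, block_size)
        bill += used * price
        remaining -= used
        if remaining <= 0:
            break
    return bill

for usage in (20, 100, 400):
    flat, tiered = flat_bill(usage), surcharge_bill(usage)
    print(f"{usage:4d} GB: flat ${flat:6.2f}   tiered ${tiered:6.2f}   "
          f"average per GB under the tiered plan ${tiered / usage:.2f}")
```

Under the block tariff the average per-GB price rises with usage, which is the surcharge shape you would want if total usage is standing in for the congestion a user imposes; a true congestion price would instead depend on when the traffic is sent, not on how much of it there is.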

Discrimination by bandwidth is not directly addressed.  It is therefore implicitly allowed because paragraph 73 reads “Differential treatment of traffic that does not discriminate among specific uses of the network or classes of uses is likely reasonable. For example, during periods of congestion a broadband provider could provide more bandwidth to subscribers that have used the network less over some preceding period of time than to heavier users.”

But the following paragraph comes from the section on Network Management.

Network Congestion. Numerous commenters support permitting the use of reasonable network management practices to address the effects of congestion, and we agree that congestion management may be a legitimate network management purpose. For example, broadband providers may need to take reasonable steps to ensure that heavy users do not crowd out others. What constitutes congestion and what measures are reasonable to address it may vary depending on the technology platform for a particular broadband Internet access service. For example, if cable modem subscribers in a particular neighborhood are experiencing congestion, it may be reasonable for a broadband provider to temporarily limit the bandwidth available to individual end users in that neighborhood who are using a substantially disproportionate amount of bandwidth. (paragraph 91)

At face value it gives well-intentioned providers the ability to manage congestion.  But there doesn’t seem to be a clear statement about how this ability can be integrated with pricing.  Can providers sell “managed” service at a discount relative to “premium” service?  One reassuring passage emphasizes that network management practices must be consistent with the no-discrimination-by-data-type mandate.  So for example, congestion caused by high-bandwidth video must be managed equally whether it comes from YouTube or from Comcast’s own video services.

Finally, the rules permit what’s called “end-user controlled” discrimination, i.e. 2nd degree price-discrimination.  This means that broadband providers are permitted to offer an array of pricing plans from which users select.

Maximizing end-user control is a policy goal Congress recognized in Section 230(b) of the Communications Act, and end-user choice and control are touchstones in evaluating the reasonableness of discrimination. As one commenter observes, “letting users choose how they want to use the network enables them to use the Internet in a way that creates more value for them (and for society) than if network providers made this choice,” and “is an important part of the mechanism that produces innovation under uncertainty.” Thus, enabling end users to choose among different broadband offerings based on such factors as assured data rates and reliability, or to select quality-of-service enhancements on their own connections for traffic of their choosing, would be unlikely to violate the no unreasonable discrimination rule, provided the broadband provider’s offerings were fully disclosed and were not harmful to competition or end users.

While this paints a too-rosy picture of the consumer-welfare effects of 2nd degree price-discrimination (it typically makes some consumers worse off and can easily make all consumers worse off) it seems hard to imagine how you can allow the kind of tiered pricing already discussed and not allow consumers to choose among plans.

So the FCC is allowing broadband providers to roll out metered service, possibly with quantity premiums, and there is a grey area when it comes to bandwidth restrictions.  These are consistent with, but not implied by, efficient pricing, and of course we are putting them in the hands of monopolists, not social planners.  They certainly fall short of what net-neutrality hawks were asking for but it was wishful thinking to imagine that these changes were not coming.

I think that the no-blocking and no unreasonable discrimination rules are the core of net-neutrality as an economic principle and getting these is more than sufficient compensation for tiered pricing.

Final disclaimer:  everything above applies to “fixed broadband providers” like cable or satellite.  The FCC’s approach to mobile broadband can be summarized as “wait-and-see.”

Moving us one step closer to a centralized interview process (a good thing as I have argued), the Duke department of economics is posting video clips of job talks given by their new PhD candidates.  Here is the Duke Economics YouTube Channel, and here is the talk of Eliot Annenberg (former NU undergrad and student of mine btw.)  I expect more and more departments to be doing this in the future. (Bearskin bend: Econjeff)

While we are on the subject here is a recent paper that studies the Economics academic labor market (beyond the rookie market.)  The abstract:

In this paper we study empirically the labor market of economists. We look at the mobility and promotion patterns of a sample of 1,000 top economists over thirty years and link it to their productivity and other personal characteristics. We find that the probability of promotion and of upward mobility is positively related to past production. However, the sensitivity of promotion and mobility to production diminishes with experience, indicating the presence of a learning process. We also find evidence that economists respond to incentives. They tend to exert more effort at the beginning of their career when dynamic incentives are important. This finding is robust to the introduction of tenure, which has an additional negative ex post impact on production. Our results indicate therefore that both promotions and tenure have an effect on the provision of incentives. Finally, we detect evidence of a sorting process, as the more productive individuals are allocated to the best ranked universities. We provide a very simple theoretical explanation of these results based on Holmström (1982) with heterogeneous firms.

via eric barker.

Today the commissioners of the FCC will meet to vote on a new proposed policy concerning Net Neutrality.  It is expected to pass.  Pundits, policymakers and media of all predispositions are hyperventilating over the proposal but none link to it and I can’t find the actual document anywhere.  Does anybody have a link to it?

To use the justice system most effectively to stop leaks you have to make two decisions.

First, you have to decide what will be a basis for punishment. In the case of a leak you have essentially two signals you could use. You know that classified documents are circulating in public, and you know which parties are publishing the classified documents. The distinctive feature of the crime of leaking is that once the documents have been leaked you already know exactly who will be publishing them: The New York Times and Wikileaks. Regardless of who was the original leaker and how they pulled it off.

That is, the signal that these entities are publishing classified documents is no more informative about the details of the crime than the more basic fact that the documents have been leaked. It provides no additional incentive benefit to use a redundant signal as a basis for punishment.

Next you have to decide who to punish. Part of what matters here is how sensitive that signal is to a given actor’s efforts. Now the willingness of Wikileaks and The New York Times to republish sensitive documents certainly provides a motive to leakers and makes leaks more likely. But what also matters is the incentive bang for your punishment buck, and deterring all possible outlets from mirroring leaks would be extremely costly. (Notwithstanding Joe Lieberman.)

A far more effective strategy is to load incentives on the single agent whose efforts have the largest effect on whether or not a leak occurs: the guy who was supposed to keep them protected in the first place. Because when a leak occurs, in addition to telling you that some unknown and costly to track person spent too much effort trying to steal documents, it tells you that your agent in charge of keeping them secret didn’t spend enough effort doing the job you hired him to do.

You should spend 100% of your scarce punishment resources where they will do the most good: incentivizing him (or her.)

(Based on a conversation with Sandeep.)

Update: The Australian Government seems to agree. (cossack click:  Sandeep)

For 4.6 billion years, the Sun has provided free energy, light, and warmth to Earth, and no one ever realized what a huge moneymaking opportunity is going to waste. Well, at long last, the Sun is finally under new ownership.

Angeles Duran, a woman from the Spanish region of Galicia, is the new proud owner of the Sun. She says she got the idea in September when she read about an American man registering his ownership of the Moon and most of the planets in the Solar System – in other words, all the celestial bodies that don’t actually do anything for us.

Duran, on the other hand, snapped up the solar system’s powerhouse, and all it cost her was a trip down to the local notary public to register her claim. She says that she has every right to do this within international law, which only forbids countries from claiming planets or stars, not individuals:

“There was no snag, I backed my claim legally, I am not stupid, I know the law. I did it but anyone else could have done it, it simply occurred to me first.”

She will soon begin charging for use.  I advise her to hire a good consultant because pricing The Sun is not your run-of-the-mill profit maximization exercise. First of all, The Sun is a public good.  No individual Earthling’s willingness to pay incorporates the total social value created by his purchase.  So it’s going to be hard to capitalize on the true market value of your product even if you could get 100% market share.

Even worse, it’s a non-excludable public good.  Which means you have to cope with a massive free-rider problem.  As long as one of us pays for it and you turn it on, we all get to use it.  So if you just set a price for The Sun, forget about market share, at most you’re gonna sell to just one of us.

You have to use a more sophisticated mechanism.  Essentially you make the people of Earth play a game in which they all pledge individual contributions and you commit not to turn on The Sun unless the total pledge exceeds some minimum level.  You are trying to make each individual feel as if his pledge has a chance of being pivotal:  if he doesn’t contribute today then The Sun doesn’t rise tomorrow.
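The trouble is that “feeling pivotal” gets harder to engineer as the planet fills up.  Here is a toy calculation (mine, and not the Mailath-Postlewaite mechanism itself): suppose each of the other n-1 Earthlings pledges independently with probability q and The Sun is switched on only if at least half the population pledges.  Your pledge matters only when the others land exactly one short of the threshold, and that event becomes vanishingly rare as n grows.

```python
# Toy calculation of how rarely any one Earthling is pivotal in a threshold
# pledge mechanism (an illustration of the free-rider problem, not the
# Mailath-Postlewaite construction). Assume each of the other n-1 people
# pledges independently with probability q, and The Sun is switched on only
# if at least ceil(alpha * n) people pledge. Your pledge changes the outcome
# only when the others land exactly one pledge short of the threshold.

from math import ceil, exp, lgamma, log

def pivot_probability(n, q=0.5, alpha=0.5):
    threshold = ceil(alpha * n)      # pledges needed to turn The Sun on
    k = threshold - 1                # the others must hit exactly this count
    if k < 0 or k > n - 1:
        return 0.0
    log_binom = lgamma(n) - lgamma(k + 1) - lgamma(n - k)   # log C(n-1, k)
    return exp(log_binom + k * log(q) + (n - 1 - k) * log(1 - q))

for n in (10, 100, 10_000, 1_000_000):
    print(f"population {n:>9,}: P(your pledge is pivotal) ~ {pivot_probability(n):.1e}")
```

With q = 1/2 and a majority threshold the pivot probability falls roughly like one over the square root of the population, so the incentive to pledge rather than free-ride melts away on a planetary scale.  That is the intuition behind the result cited in the next paragraph.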

A mechanism like that will do better than just hanging a simple price tag on The Sun but don’t expect a windfall even from the best possible mechanism.  Mailath and Postlewaite showed, essentially, that the maximum per-capita revenue you can earn from selling The Sun converges to zero as the population increases due to the ever-worsening free-rider problem.

You might want to start looking around for other planets in need of a yellow dwarf and try to generate a little more competition.

(Actual research comment:  Mailath and Postlewaite consider efficient public good provision.  I am not aware of any characterization of the profit-maximizing mechanism for a fixed population size and zero marginal production cost.)

[drawing:  Move Mountains from http://www.f1me.net]

Today Qatar was the surprise winner in the bid to host the FIFA World Cup in 2022, beating Japan, The United States, Australia, and Korea.  It’s an interesting procedure by which the host is decided consisting of multiple rounds of elimination voting.  22 judges cast ballots in a first round.  If no bidder wins a majority of votes then the country with the fewest votes is eliminated and a second round of voting commences.  Voting continues in this way for as many rounds as it takes to produce a majority winner.  (It’s not clear to me what happens if there is a tie in the final round.)

Every voting system has its own weaknesses, but this one is especially problematic, giving strong incentives for strategic voting.  Think about how you would vote in an early round when it is unlikely that a majority will be secured. Then, if it matters at all, your vote determines who will be eliminated, not who will win.   If you are confident that your preferred site will survive the first round, then you should not vote truthfully.  Instead you should vote to keep bids alive that will be easier to beat in later rounds.

Can we look at the voting data and identify strategic voting?  As a simple test we could look at revealed preference violations.  For example, if Japan survives round one and a voter switches his vote from Japan to another bidder in round two, then we know that he is voting against his preference in either round one or two.

But that bundles together two distinct types of strategic voting, one more benign than the other.  For if Japan garners only a few votes in the first round but survives, then a true Japan supporter might strategically abandon Japan as a viable candidate and start voting, honestly, for her second choice.  Indeed, that is what seems to have happened after round one.  Here are the data.

We have only vote totals so we can spot strategic voting only if the switches result in a net loss of votes for a surviving candidate.  This happened to Japan but probably for the reasons given above.
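Here is a minimal sketch of that check.  The tallies below are stylized, chosen only to be consistent with the description above (Japan and Qatar each lose a vote after round one), not necessarily the exact FIFA numbers; the point is the test itself: flag any bidder that survives a round yet arrives at the next one with fewer votes.

```python
# Sketch of the revealed-preference check: a bidder that survives a round yet
# has fewer votes in the next one implies at least one voter switched away
# from a bid that was still alive. The tallies are stylized placeholders
# consistent with the description in the post, not official FIFA data.

def net_loss_switches(rounds):
    """rounds: list of dicts mapping bidder -> vote count, one dict per round."""
    flags = []
    for r in range(1, len(rounds)):
        prev, curr = rounds[r - 1], rounds[r]
        for bidder, votes in curr.items():       # every bidder in curr survived round r
            if votes < prev.get(bidder, 0):
                flags.append((bidder, r, prev[bidder], votes))
    return flags

rounds = [
    {"Qatar": 11, "Korea": 4, "USA": 3, "Japan": 3, "Australia": 1},
    {"Qatar": 10, "Korea": 5, "USA": 5, "Japan": 2},
    {"Qatar": 11, "USA": 6, "Korea": 5},
    {"Qatar": 14, "USA": 8},
]

for bidder, rnd, before, after in net_loss_switches(rounds):
    print(f"{bidder}: {before} votes in round {rnd} -> {after} in round {rnd + 1}"
          " (someone abandoned a surviving bid)")
```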

The more suspicious switch is the loss of one vote for the round one leader Qatar. One possibility is that a Qatar supporter, seeing Qatar’s survival to round three secured, cast a strategic vote in round two to choose among the other survivors. But the more likely scenario in my opinion is a strategic vote for Qatar in round one by a voter who, upon learning from the round one votes that Qatar was in fact a contender, switched back to voting honestly.

And sell them in January to take advantage of the January effect: the predictable increase in stock prices from December to January. Many explanations have been offered, the most prominent being a tax-motivated sell-off in December by investors trying to realize a capital loss before the end of the year. But here is a paper that demonstrates a large and significant January effect in simple laboratory auctions. Two identical auctions were conducted, one in December and one in January, and the bidding was significantly higher in January.

In the first experimental test of the January effect, we find an economically large and statistically significant effect in two very different auction environments. Further, the experiments spanned three different calendar years, with one pair of auctions conducted in December 2003 and January 2004 and another pair of auctions conducted in December 2004 and January 2005. Even after controlling for a wide variety of auxiliary effects, we find the same result. The January effect is present in laboratory auctions, and the most plausible explanation is a psychological effect that makes people willing to pay higher prices in January than in December.

Sombrero swipe: Barking Up The Wrong Tree.

Responding to the flap about the Pope’s new stance on condom use by male prostitutes, Rev. Joseph Fessio, editor in chief of Ignatius Press, which published the book in which the Pope is quoted, provides this clarification:

But let me give you a pretty simple example. Let’s suppose we’ve got a bunch of muggers who like to use steel pipes when they mug people. But some muggers say, gosh, you know, we don’t need to hurt them that badly to rob them. Let’s put foam pads on our pipes. Then we’ll just stun them for a while, rob them and go away. So if the pope then said, well, yes, I think that using padded pipes is actually a little step in a moral direction there, that doesn’t mean he’s justifying using padded pipes to mug people. He’s just saying, well, they did something terrible, but while they were doing that, they had a little flicker of conscience there that led them in the right direction. That may grow further, so they stop mugging people completely.

Side topic:  is the Catholic Church revealing that sin is a problem of moral hazard or adverse selection?

It has been argued that earmarks can’t have any effect on the level of government spending because earmarks only specify how already-budgeted spending will be allocated.  Earmarks don’t represent additional funding, they simply dictate how an agency spends its funds.

But this assumes that the ability to earmark spending doesn’t change the incentives to spend in the first place. Here are two arguments why they must.

  1. Earmarks raise spending. Suppose you are deciding on your grocery budget but your wife is going to do the shopping.  Whatever you budget, she is going to spend it to maximize her own utility function which is not the same as yours.  Your marginal return for every dollar spent is smaller than if you were doing the spending.  Your grocery budget is determined by the condition that the marginal return on groceries is equal to the marginal return on Ahmad Jamal concerts at the Regatta Bar in Cambridge (last week:  awesome.)  This happens at a lower level of spending when your wife has discretion than when you do.  If you could “earmark” the grocery spending you would get exactly what you would buy if it were you doing the shopping.  So the earmarks raise spending.  (A toy numerical version of this argument appears after the list.)
  2. Earmarks Reduce Spending. Legislation is not decided by a single agent who is maximizing a single utility function.  Legislators have to be bribed to get their support.  Suppose that a crucial supporter wants a road built in his state.  With an earmark this can be achieved directly.  Without an earmark, you have to appropriate general infrastructure spending and hope that he gets treated well by the agency. For the same reason as above, the marginal value to your colleague of a dollar allocated to general infrastructure is lower than if the spending were earmarked.  So you have to increase the budget in order to achieve the bribe he requires.
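Here is the toy numerical version of the first argument promised above.  The functional forms and numbers are mine: the principal values the grocery items according to square-root utilities with weights a, the shopper splits any budget according to her own weights b, and the budget is set where its marginal value equals a fixed outside return.

```python
# Toy version of argument 1. With square-root utilities the principal's value
# of a grocery budget B is c * sqrt(B), where the coefficient c depends on who
# allocates the budget across the two items. The budget is then chosen so that
# the marginal value c / (2 * sqrt(B)) equals the outside return lam, which
# gives B* = (c / (2 * lam)) ** 2. All numbers are illustrative.

from math import sqrt

a = (3.0, 1.0)    # the principal's weights on the two grocery items
b = (1.0, 3.0)    # the shopper's weights (she cares about different things)
lam = 0.5         # marginal return on the outside option (concert tickets)

# Coefficient c when the principal allocates (earmarks) versus when the
# shopper splits the budget to maximize her own weights.
c_earmarked = sqrt(a[0] ** 2 + a[1] ** 2)
c_delegated = (a[0] * b[0] + a[1] * b[1]) / sqrt(b[0] ** 2 + b[1] ** 2)

def chosen_budget(c, lam=lam):
    return (c / (2 * lam)) ** 2

print(f"budget with earmarks   : {chosen_budget(c_earmarked):.2f}")
print(f"budget with discretion : {chosen_budget(c_delegated):.2f}")
```

In this parameterization earmarking roughly triples the grocery budget.  The second argument runs the same logic in reverse: the colleague demanding the bribe is the one whose marginal value per dollar falls when the spending is not earmarked, so the un-earmarked budget has to be bigger.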

Last Tuesday, everyone’s favorite mad-scientist-laboratory of Democracy, the San Francisco City Council, enacted a law banning the Happy Meal. Officially what is banned is the bundling of toys with fast food. The theory seems to be that toys are a cheap substitute for quality food and that a prohibition on bundling will force McDonald’s to compete instead on the quality of its food.

But it’s not easy to lay out a coherent theory of the Happy Meal. You could try a bargaining story. Kids like toys, parents want healthy food but are willing to compromise if the kids put up enough of a fuss. McDonald’s offers that compromise in the form of cheap toys and crappy food, raking in their deadweight loss (!).

You could try a story based on 2nd degree price discrimination. There are parents who care more about healthy food (Chicken Nuggets??) and parents who care less. A standard form of price discrimination has a higher end item for the first group and a low-end item for the second. The low-end item fetches a low price because it is purposefully inferior. But if toys are a perfect substitute for healthy food in the eyes of the health-indifferent parents, then a Happy Meal raises their willingness to pay without attracting the health-conscious (and toy-indifferent) parents.

You could even spin a story that suggests that the SF City Council’s plan may backfire. That’s what Josh Gans came up with in his post at the Harvard Business Review Blog.

For a parent, this market state of affairs spells opportunity. With McDonald’s offering a toy instead of additional bad stuff, the parent can ‘sell’ this option to their children and get them to eat less bad stuff than they would at another chain. The toy is a boon if the parents are more concerned about the bad stuff than having another junky toy in the house … They allow a parent to increase the value of healthier products in the eyes of children and negotiate a better price (perhaps in the form of better food at home) for allowing their children to have them. Happy Meals do have carrots after all.

But all of these stories have the same flaw:  McDonald’s can still achieve exactly the same outcome by unbundling the Happy Meal, selling toys à la carte alongside the Now-Only-Somewhat-Bemused Meals they used to share a cardboard box with.  Just as before, families will settle their bargains by re-assembling the bundle, health sub-conscious families will buy the low-end burger and pair it with toys, and parents who have to bribe their kids will buy McDonald’s exclusive movie-tie-in toys to get them to eat their carrots.

(Yes I am aware of the Adams-Yellen result that bundling can raise profits, but this has nothing to do with toys and healthy food specifically.  Indeed McAfee, McMillan and Whinston show that generically the Adams-Yellen logic implies that some form of bundling is optimal.  So this cannot be the relevant story for McDonald’s, which is otherwise à la carte.)

So I don’t think that economic theory by itself has a lot to say about the consequences of the Exiled Meal.  The one thing we can say is that McDonald’s doesn’t want to be forced to unbundle.  Putting constraints like that on a monopolist can sometimes improve consumer welfare and sometimes reduce it.  It all depends on whether you think McDonald’s increases its share of the surplus by lowering the total or raising it.  The SF City Council, like most of us one way or the other, probably had formed an opinion on that question already.

We have a new guest-blogger:  Roger Myerson.

Roger is a game theorist but his work is known to everyone – theorist or otherwise – who has done graduate work in economics.  If an economist from the late nineteenth century, like Edgeworth, or early twentieth century, like Marshall, wakes up and asks, “What’s new in economics since my time?”, I guess one answer is, “Information Economics”.

Is the investment bank trying to sell me a security that it is trying to dump or is it a good investment?  Is a bank’s employee screening borrowers carefully before he makes mortgage loans? Does the insurance company have enough reserves to cover its policies if many of them go bad at the same time?  All these topical situations are characterized by asymmetric information: One party knows some information or is taking an action that is not observable to a trading partner.

While the classical economists certainly discussed information, they did not think about it systematically.  At the very least, we have to get into the nitty-gritty of how an economic agent’s allocation varies with his actions and his information to study the impact of asymmetric information.  And perfect competition with its focus on prices and quantities is not a natural paradigm for studying these kinds of issues. But if we open the Pandora’s Box of production and exchange to study allocation systems broader than perfect competition, how are we even going to be able to sort through the infinite possibilities that appear?   And how are we going to determine the best way to deal with the constraints imposed by asymmetric information?

These questions were answered by the field of mechanism design to which Roger Myerson made major contributions.  If an allocation of resources is achievable by any allocation system (or mechanism), then it can be achieved by a “direct revelation game” (DRG) where agents are given the incentive to report their information honestly, told actions to take and then given the incentives to follow orders.  To get an agent to tell you his information, you may have to pay him “information rent”.  To get an agent to take an action, you may have to pay him a kind of bonus for performing well, “an efficiency wage”.  But these payments are unavoidable – if you have to pay them in a DRG, you have to pay them (or more!) in any other mechanism.  All this is quite abstract, but it has practical applications. Roger used these techniques to show that the kind of simple auctions we see in the real world in fact maximize expected profits for the seller in certain circumstances, even though they leave information rents to the winner.   These rents must be paid in a DRG and hence if an auction leaves exactly these rents to the buyers, the seller cannot do any better.
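To make the auction result concrete, here is a textbook special case (not Roger’s general theorem, just its best-known application): with two bidders whose values are independently uniform on [0,1], the virtual value is 2v - 1, so the revenue-maximizing mechanism is a second-price auction with a reserve price of 1/2.  A quick simulation sketch:

```python
# Textbook special case of the optimal auction: two bidders with values
# i.i.d. uniform on [0, 1], so the virtual value is v - (1 - v)/1 = 2v - 1,
# which crosses zero at the optimal reserve price r = 1/2. The simulation
# compares a plain second-price auction with one that adds that reserve.

import random

def second_price_revenue(values, reserve=0.0):
    """Winner pays max(second-highest value, reserve); no sale if nobody meets the reserve."""
    ordered = sorted(values)
    top, second = ordered[-1], ordered[-2]
    if top < reserve:
        return 0.0
    return max(second, reserve)

random.seed(0)
n_draws = 200_000
no_reserve = with_reserve = 0.0
for _ in range(n_draws):
    values = [random.random(), random.random()]
    no_reserve += second_price_revenue(values)
    with_reserve += second_price_revenue(values, reserve=0.5)

print(f"average revenue, no reserve  : {no_reserve / n_draws:.3f}   (theory: 1/3)")
print(f"average revenue, reserve 1/2 : {with_reserve / n_draws:.3f}   (theory: 5/12)")
```

The reserve leaves the good unsold whenever the highest value falls below 1/2, trades that would have been efficient to make; that forgone efficiency is the price the seller pays to claw back some of the winner’s information rent.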

For this work and more, he won the Nobel Memorial Prize in Economics in 2007 with Leo Hurwicz and Eric Maskin.  Recently, Jeff mentioned that Roger and Mark Satterthwaite should get a second Nobel for the Myerson-Satterthwaite Theorem which identifies environments where it is impossible to achieve efficient allocations because agents have to be paid information rents to reveal their information honestly. This work also uses the framework and DRG I have described above.

Over time, Roger has become more of an “applied theorist”.  That is a fuzzy term that means different things to different people.  To me, it means that a researcher begins by looking at an issue in the world and writes down a model to understand it and say something interesting about it.  Roger now thinks about how to build a system of government from scratch or about the causes of the financial crisis.  How do we make sure leaders and their henchmen behave themselves and don’t try to extract more than minimum rents?  How can incentives of investment advisors generate credit cycles?

These questions are important and obviously motivated by political and economic events.  The first question belongs to “political economy” and hints at Roger’s interests in political science.  More broadly, Roger is now interested in all sorts of policy questions, in economics and domestic and foreign policy.

Jeff and I are very happy to have him as a guest blogger.  We hope he finds it easy and fun and the blog provides him with a path to get his analyses and opinions into the public domain.  We hope he becomes a permanent member of the blog.  So, if among the posts about masturbation and Charlie Sheen’s marital problems you find a post about “What should be done in Afghanistan”, you’ll know who wrote it.

Welcome Roger!

You may have heard that the Michelin guide has been bestowing many stars on Japanese restaurants.  So many that Europeans are suspecting ulterior motives.

The generous distribution of stars has prompted a snarky backlash among some Western critics and celebrity chefs, whose collective egos can be larger than a croquembouche. Some have said Michelin is showering stars upon Japan in an attempt to gain favor in a brand-conscious, France-loving country where it wants to sell not only culinary guides, but automobile tires.

Well, the Japanese aren’t so keen either.

Many Japanese chefs, especially in the Kansai region, say they never courted this attention. Even a single Michelin star can be seen as a curse by the Japanese: Their restaurants are for their customers. Why cook for a room full of strangers? Even worse: crass foreigners.

“It is, of course, a great honor to be included in the Michelin guide. But we asked them not to include us,” says Minoru Harada, an affable young Osaka chef. His Sakanadokoro Koetsu, a fish restaurant with a counter and 10 seats, just earned a single star, its first. Loyal customers have sustained the restaurant over the years, he says, adding: “If many new customers come, it is difficult.”

And with the upcoming release of Michelin’s guide to Tokyo, Japan may soon be the most spangled restaurant nation in the world.

Last week there were numerous celebrations at Northwestern in honor of our colleague Dale Mortensen, one of the new Nobel Laureates in Economics.  There were two highlights.  First, here is Dale opening a bottle of champagne with a sword.

and here is a really lovely moment captured on voicemail at 5:30AM CDT.

(thanks for that last bit of clarification Dale! 🙂 )

Tyler Cowen explores economic ideas that should be popularized.  Let me take this opportunity to help popularize what I think is one of the pillars of economic theory and the fruit of the information economics/game theory era.

When we notice that markets or other institutions are inefficient, we need to ask: compared to what?  What is the best we could possibly hope for even if we could design markets from scratch?  Myerson and Satterthwaite give the definitive answer:  even the best of all possible market designs must be inefficient; it must leave some potential gains from trade unrealized.

If markets were perfectly efficient, whenever individual A values a good more than individual B it should be sold from B to A at a price that they find mutually agreeable.  There are many possible prices, but how do they decide on one?  The Myerson-Satterthwaite theorem says that, no matter how clever you are in designing the rules of negotiation, inevitably it will sometimes fail to converge on such a price.

The problem is one of information.  If B is going to be induced to sell to A, the price must be high enough to make B willing to part with the good.  And the more B values the good, the higher the price must be.  That principle, which is required for market efficiency, creates an incentive problem which makes efficiency impossible.  Because now B has an incentive to hold out for a higher price by acting as if he is unwilling to part with the good.  And sometimes that price is more than A is willing to pay.

From Myerson-Satterthwaite we know what the right benchmark is for markets:  we should expect no more from them than what is consistent with these informational constraints.  It is a fundamental change in the way we think about markets and it is now part of the basic language of economics.  Indeed, in my undergraduate intermediate microeconomics course I give a simple proof of a dominant-strategy version of Myerson-Satterthwaite; you can find it here.
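For a sense of the magnitudes in the simplest case, here is a toy calculation (standard textbook numbers, not the proof linked above).  Let the buyer’s value and the seller’s cost be independent and uniform on [0,1].  Trade is efficient whenever the value exceeds the cost, which happens half the time, but the natural dominant-strategy mechanism, a single posted price p, produces trade only when the cost is below p and the value is above it.

```python
# Bilateral trade with buyer value v and seller cost c i.i.d. uniform on [0, 1].
# First best: trade whenever v > c, probability 1/2. A posted price p (trade
# iff c < p < v, which is dominant-strategy incentive compatible) trades with
# probability p * (1 - p), at most 1/4 at p = 1/2. A toy illustration of the
# efficiency loss, not a proof of the theorem.

import random

random.seed(1)
draws = [(random.random(), random.random()) for _ in range(200_000)]   # (v, c) pairs

efficient = sum(v > c for v, c in draws) / len(draws)
print(f"P(trade is efficient)        : {efficient:.3f}   (theory: 0.500)")

for p in (0.25, 0.5, 0.75):
    traded = sum(c < p < v for v, c in draws) / len(draws)
    print(f"P(trade at posted price {p:.2f}): {traded:.3f}   (theory: {p * (1 - p):.3f})")
```

Even the best posted price (p = 1/2) realizes only half of the efficient trades.  More elaborate Bayesian mechanisms can do better than a single price, but Myerson-Satterthwaite says none of them closes the gap entirely.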

(Myerson won the Nobel prize jointly with Maskin and Hurwicz.  There should be a second Nobel for Myerson and Satterthwaite.)

From the latest issue of the Journal of Wine Economics comes this paper.

The purpose of this paper is to measure the impact of Robert Parker’s oenological grades on Bordeaux wine prices. We study their impact on the so-called en primeur wine prices, i.e., the prices determined by the château owners when the wines are still extremely young. The Parker grades are usually published in the spring of each year, before the wine prices are established. However, the wine grades attributed in 2003 have been published much later, in the autumn, after the determination of the prices. This unusual reversal is exploited to estimate a Parker effect. We find that, on average, the effect is equal to 2.80 euros per bottle of wine. We also estimate grade-specific effects, and use these estimates to predict what the prices would have been had Parker attended the spring tasting in 2003.

Note that the €2.80 number is the effect on price from having a rating at all, averaging across good ratings and bad.  You do have to buy some identifying assumptions, however.

Here Tyler Cowen writes about underappreciated economist Eric van den Steen.  Tyler is right, Eric van den Steen is underappreciated.  His work is fresh and creative and he is venturing into terrain (heterogeneous priors) where few dare to tread.  Not only that but he is drawing out credible applied ideas from there, not just philosophy. (Ran Spiegler and Kfir Eliaz are two others that come to mind with the same creativity and courage to embrace these models.)

Tyler Cowen is under-appreciated.  Not as a blogger of course, he writes the most popular blog in economics and one of the most popular blogs full stop.  It may sound strange, especially to readers of Marginal Revolution, but Tyler Cowen is an under-appreciated economist.

Here is his CV.  Here is his google scholar listing.  Here is the ranking of his economics department.  If it were not for Marginal Revolution, very few economists would know who Tyler Cowen is.

But we all read Marginal Revolution.  And we all know that Tyler is smarter, broader, more knowledgeable, more intuitive than most of us and our colleagues.  If he wanted it to, his CV could run circles around ours.  I don’t claim to know why he doesn’t want that, but I infer that Tyler believes he is innovating a new way to be a successful and influential economist without compromising on the very high standards that those of us in the old regime hold.

Public signals like Google Scholar cites and top-journal publications can’t measure his contribution to economics but we measure it privately every day when we read Marginal Revolution.  And it deserves to be made public:  Tyler Cowen is a great economist.

One doesn’t just accidentally know who Eric van den Steen is, let alone be able to summarize in a paragraph his contribution and its relation to the literature.  I barely knew who he was and it’s my job as a member of Northwestern’s recruiting committees to know.  For Tyler Cowen to be able to pick him out of the very many young economists and identify him as the most under-appreciated reveals that Tyler Cowen knows and reads every economist. I believe it is true.

And he understands them better than most of us.  Look at what he wrote about Dale Mortensen.  And the Mortensen-Pissarides model.  Here’s Tyler re-arranging the literature on sticky prices, trade, and monetary policy.  His piece on free parking shows a mastery of the lost art of price theory, whether or not you agree with his final conclusion.  Look at his IO reading list for crying out loud.  Finally, set aside an hour and watch him in his element speaking at Google about incentives and prizes.

So hail T-Cow!  Wunder-(not)-kind!

Which type of artist debuts with obscure experimental work, the genius or the fraud? Kim-Sau Chung and Peter Eso have a new paper which answers the question:  it’s both of these types.

Suppose that a new composer is choosing a debut project and he can try a composition in a conventional style or he can write 4’33”, the infamous John Cage composition consisting of three movements of total silence. Critics understand the conventional style well enough to assess the talent of a composer who goes that route. Nobody understands 4’33” and so the experimental composer generates no public information about his talent.

There are three types of composer.  Those that know they are talented enough to have a long career, those that know they are not talented enough and will soon drop out, and then the middle type:  those that don’t know yet whether they are talented enough and will learn more from the success of their debut.  In the Chung-Eso model, the first two types go the experimental route and only the middle type debuts with a conventional work.

The reason is intuitive.  First, the average talent of experimental artists must be higher than that of conventional artists. Because if it were the other way around, i.e. if conventional debuts signaled talent, then all types would choose a conventional debut, making it not a signal at all.  The middle types would because they want that positive signal and they want the more informative project.  The high and low types would because the positive signal is all they care about.

Then, once we see that the experimental project signals higher than average talent, we can infer that it’s the high types and the low types that go experimental.  Both of these types are willing to take the positive signal from the style of work in exchange for generating less information by the actual composition.  The middle types on the other hand are willing to forego the buzz they would generate by going experimental in return for the chance to learn about their talent.  So they debut conventionally.

Now, as the economics PhD job market approaches, which fields in economics are the experimental ones (generates buzz but nobody understands it, populated by the geniuses as well as the frauds) and which ones are conventional (easy to assess, but generally dull and signals a middling type) ?

Subsidized sterilization.

Drug addicts across the UK are being offered money to be sterilised by an American charity.

Project Prevention is offering to pay £200 to any drug user in London, Glasgow, Bristol, Leicester and parts of Wales who agrees to be operated on.

The first person in the UK to accept the cash is drug addict “John” from Leicester who says he “should never be a father”.

Probably everyone would agree that a better contract would be one that offers payment for regular use of contraception, rather than irreversible sterilization.  Sterilization is probably a “second-best” because it is easier to monitor and enforce.

But it takes sides in the addict’s conflicting preferences over time.  He is trading off money today versus children in the future.  For some, that’s what makes it the right second-best.  For others that’s what makes it exploitation.

Here is more.

Suppose I want to divide a pie between you and another person.  It is known that the other person would get value p from a fraction p of the pie (that is, each “unit” of pie is worth 1 to him), but your value is known only to you.  You value a fraction (1-p) of the pie at θ(1-p) dollars but nobody but you knows what θ is.

My goal is to allocate the pie efficiently.  If both of you are selfish, then this means that I would like to give all the pie to him if θ < 1 and all the pie to you otherwise.  And if you are selfish then I can’t get you to tell me the truth about θ.  You will always say it is larger than 1 in order to get the whole pie.

But what if you are inequity averse? Inequity aversion is a behavioral theory of preferences which is often used to explain non-selfish behavior that we see in experiments.  If you are inequity averse your utility has a kink at the point where your total pie value equals his.  When you have less than him you always like more pie both because you like pie and because you dislike the inequality.  When you have more than him you are conflicted because you like more pie but you dislike having even more than he has.

In that case, my objective is more interesting than when you are selfish.  If θ is not too much larger than 1, then both you and he want perfect equity.  So that’s the efficient allocation.  And to achieve that, I should give you less pie than he gets because you get more value per unit.  And now as we consider variations in θ, increases in θ mean you should get even less!  This continues until θ is so much larger than 1 that your value for more pie outweighs your aversion to inequity, and now you want the whole pie (although he still wants equity.)
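The post does not pin down a functional form, so here is a sketch of that preference structure using a Fehr-Schmidt style utility as one concrete parameterization (the parameters are mine): you lose α per dollar by which your pie value falls short of his and β per dollar by which it exceeds his.

```python
# Sketch of the preference structure using a Fehr-Schmidt style utility as one
# concrete parameterization (the post does not commit to a functional form).
# The agent gets theta per unit of pie, the other person gets 1 per unit, and
# the agent loses alpha per dollar of disadvantageous inequality and beta per
# dollar of advantageous inequality.

ALPHA, BETA = 0.8, 0.7    # hypothetical inequity-aversion parameters (beta > 1/2)

def utility(share, theta, alpha=ALPHA, beta=BETA):
    mine, his = theta * share, 1.0 - share
    return mine - alpha * max(his - mine, 0.0) - beta * max(mine - his, 0.0)

def preferred_share(theta):
    """Utility is piecewise linear in the share with its only kink at the
    equal-value split, so the optimum is either that split or the whole pie."""
    equal_value_share = 1.0 / (1.0 + theta)
    return max(equal_value_share, 1.0, key=lambda s: utility(s, theta))

for theta in (1.0, 1.5, 2.0, 2.3, 2.4, 3.0):
    s = preferred_share(theta)
    print(f"theta = {theta:.1f}: preferred share of the pie = {s:.3f} "
          f"(equal-value share = {1.0 / (1.0 + theta):.3f})")
```

With β = 0.7 the switch happens at θ = β/(1-β) ≈ 2.33: below it your favorite allocation is the equal-value split (which shrinks as θ grows), above it you want the whole pie, which is the pattern the truth-telling argument below relies on.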

And it’s now much easier to get you to tell me the truth.  You will always tell me the truth when your value of θ is in the range where perfect equity is the unique efficient outcome because that way you will get exactly what you want.  Beyond that range you will again have an incentive to lie about θ to get as much pie as possible.

So inequity aversion has a very clear implication for an experiment like this.  If the experimenter is promising always to divide the pie equitably and is asking the subject to report his value of θ, then inequity averse subjects will do only two possible things:  tell the truth, or exaggerate their value as much as possible.  They will never understate their value.

I would be curious to see if there are any experiments like this.

 

For while O’Donnell crusaded against masturbation in the mid-1990s, denouncing it as “toying” with the organs of procreation and generally undermining baby making, the facts are to the contrary. Evidence from elephants to rodents to humans shows that masturbating is—counterintuitively—an excellent way to make healthy babies, and lots of them. No one who believes in the “family” part of family values can let her claims stand.

You will find that opening paragraph in an entertaining article in Newsweek (lid lob: linkfilter.)  It surveys a variety of stories suggesting that masturbation serves an adaptive role and was selected for by evolution.  The stories given (hygiene, signaling (??)) are mostly of the just-so variety, but this is a case where we don’t need to infer exactly the reason.  We can prove the evolutionary advantage of masturbation by a simple appeal to revealed preference.

There are lots of ways we can touch ourselves and among these, Mother Nature has revealed a very clear preference.  You cannot tickle yourself. Because the brain has a system for distinguishing between stimuli caused by others and stimuli caused by ourselves. Nature puts this system to good use:  such a huge fraction of sensory information comes from incidental contact with yourself that it has to be filtered out so that we can detect contact with others.

Mother Nature could have used this same system to put an end to masturbation once and for all:  simply detect when it’s us and mute the sensation. No gain, no Spain.  Instead, she made an exception in this case.  She must have had a good reason.

Tyler Cowen invites us to ponder this game:

Rejection Therapy is a real life game with one rule: to be rejected by someone every single day, for 30 days consecutive. There are even suggestion cards available for “rejection attempts” (although they are not essential to the game).

I am not sure about rejection as therapy, any more than the general principle that it is therapeutic to expose yourself to new, perhaps uncomfortable experiences all the time.

But rejection is a very simple yardstick by which to judge how often and how hard you are trying, how high you are aiming. We should push those margins as far as they can go, up to the point of negative marginal returns. We have not passed that threshold until the rejection rate is positive.

So, whether or not it is an end in itself, a daily dose of rejection is the hallmark of a life lived to the fullest.

They say you can’t compare the greats from yesteryear with the stars of today. But when it comes to Nobel laureates, to some extent you can.

The Nobel committee is just like a kid with a bag of candy.  Every day (year) he has to decide which piece of candy to eat (to whom to give the prize) and each day some new candy might be added to his bag (new candidates come on the scene.)  The twist is that each piece of candy has a random expiration date (economists randomly perish) so sometimes it is optimal to defer eating his favorite piece of candy in order to enjoy another which otherwise might go to waste.

The empirical question we are then left with is to uncover the Nobel committee’s underlying ranking of economists based on the awards actually given over time.  It’s not so simple, but there are some clear inferences we can make. (Here’s a list of Laureates up to 2006, with their ages.)

To see that it is not so simple, note that just because X got the prize and Y didn’t doesn’t mean that X is better than Y.  It could have been that the committee planned eventually to give the prize to Y but Y died earlier than expected (or Y is still alive and the time has not yet arrived.)

When would the committee award the prize to X before Y despite ranking Y ahead of X?  A necessary condition is that Y is older than X and is therefore going to expire sooner.  (I am assuming here that age is a sufficient statistic for mortality risk.)  That gives us our one clear inference:

If X received the prize before Y and X was born later than Y then X is revealed to be better than Y.

(The specific wording is to emphasize that it is calendar age that matters, not age at the time of receiving the prize.  Also if Y never received the prize at all that counts too.)
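The rule is mechanical enough to code up.  Here is a sketch that takes (birth year, prize year) pairs and spits out the implied relation; the handful of entries included are just for illustration, not the full dataset.

```python
# Sketch of the boldface rule: X is revealed preferred to Y whenever X received
# the prize before Y (or Y never received it at all) and X was born later than
# Y. The entries below are a small illustrative subset, not the full data.

laureates = {
    # name: (birth year, prize year or None if no prize)
    "Frisch":    (1895, 1969),
    "Samuelson": (1915, 1970),
    "Arrow":     (1921, 1972),
    "Vickrey":   (1914, 1996),
    "Hurwicz":   (1917, 2007),
}

def revealed_preferred(data):
    relation = []
    for x, (x_born, x_prize) in data.items():
        if x_prize is None:
            continue                       # X never won: nothing is revealed about X
        for y, (y_born, y_prize) in data.items():
            earlier_prize = y_prize is None or x_prize < y_prize
            if x_born > y_born and earlier_prize:
                relation.append((x, y))
    return relation

for x, y in revealed_preferred(laureates):
    print(f"{x} is revealed preferred to {y}")
```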

Looking at the data, we can then infer some rankings.

One of the first economists to win the prize, Ragnar Frisch (who??) is not revealed preferred to anybody. By contrast, Paul Samuelson, who won the very next year, is revealed preferred to kuznets, hicks, leontief, von hayek, myrdal, kantorovich, koopmans, friedman, meade, ohlin, lewis, schultz, stigler, stone, allais, haavelmo, coase and vickrey.

Outdoing Samuelson is Ken Arrow, who is revealed preferred to everyone Samuelson is plus simon, klein, tobin, debreu, buchanan, north, harsanyi, schelling and hurwicz (! hurwicz won the prize 35 years later!), but minus kuznets (a total of 25!)

Also very impressive is Robert Merton who had an incredible streak of being revealed preferred to everyone winning the prize from 1998 to 2006, ended only by Maskin and Myerson (but see below.)

On the flipside, there’s Tom Schelling who is revealed to be worse than 28 other Laureates.  Leo Hurwicz is revealed to be worse than all of those plus Phelps. Hurwicz is not revealed preferred to anybody, a distinction he shares with Vickrey, Haavelmo, Schultz (who??), Myrdal (?), Kuznets and Frisch.

Paul Krugman is batting 1.000, having been revealed preferred to all (two) candidates coming after him:  Williamson and Ostrom.

Similar exercises could be carried out with any prize that has a “lifetime achievement” flavor (for example Sophia Loren is revealed preferred to Sidney Poitier, natch.)

There’s a real research program here which should send decision theorists racing to their whiteboards.  We deduced one revealed preference implication. Question:  is that all we can deduce or are there other implied relations?  This is actually a family of questions, depending on how strong an assumption we want to make about the expiration dates in the candy bag.  At one extreme we could ask “is any ranking consistent with the boldface rule above rationalizable by some expiration dates known to the child but not to us?”  My conjecture is yes, i.e. that the boldface rule exhausts all we can infer.

At the other end, we might assume that the committee knows only the age of the candidates and assumes that everyone of a given age has the average mortality rate for that age (in the United States or Europe.)  This potentially makes it harder to rationalize arbitrary choices and could lead to more inferences.  This appears to be a tricky question (the infinite horizon introduces some subtleties.  Surely though Ken Arrow has already solved it but is too modest to publish it.)

Of course, the committee might have figured out that we are making inferences like this and then would leverage those to send stronger signals.  For example, giving the prize to Krugman at age 56 becomes a very strong signal.  This would add some noise.

Finally, the kid-with-a-candy-bag analogy breaks down when we notice that the committee forms bundles.  Individual rankings can still be inferred but more considerations come into play.  Maskin and Myerson got the prize very young, but Hurwicz, with whom they shared the prize, was very close to expiration. We can say that the oldest in a bundle is revealed preferred to anyone older who receives a prize later.  Plus we can infer rankings of fields by looking at the timing of prizes awarded to researchers in similar areas.  For example, time-series econometrics (2003) is revealed preferred to the theory of organizations (2009.)

The Bottom Line:  There is clear advice here for those hoping to win the prize this year, and those who actually do.  If you do win the prize, for your acceptance speech you should start by doing pushups to prove how virile you are.  This signals to the world that you were not given the award because of an impending expiration date but that in fact there was still plenty of time left and yet the committee still saw fit to act now.  And if you fear you will never win the prize, the sooner you expire the more willing the public will be to believe that you would have won if only you had stuck around.

If Twitter bans the sale of usernames then they take away any incentive to squat.  But is the commitment credible?

While Twitter tries to work out how to make money, a Spaniard has sold his username on the site for a six-figure sum.

In 2007 Israel Meléndez set up a Twitter account under his first name. This year he was approached by the state of Israel, which wanted to buy @Israel from him for a quantity of dollars that, he told Spain’s Público newspaper, included “five zeroes”.

The sale went through despite Twitter’s stated policy of preventing username squatting and Meléndez, who runs adult websites for a living, said Twitter itself had advised the Israeli government on how this could be done.

“All the business of getting in contact with Twitter was done by them [Israel],” Meléndez said. “I never saw any emails [between them] and Twitter never contacted me, but if the @Israel account is open and working I imagine it means that Twitter had no problem with the transaction.”

This article from Not Exactly Rocket Science discusses an experiment studying “competition” between the left and right sides of the brain. Subjects in the experiment had to pick up an object placed at different points on a table, and the researchers recorded which hand they used depending on where the object was. The article makes this observation in passing.

they always used the nearest hand to pick up targets at the far edges of the table, but they used either hand for those near the middle. Their reaction times were slower when they had to choose which hand to use, and particularly if the target was near the centre of the table.

This much is expected, but it supports the idea that the brain is choosing between possible movements associated with each hand. At the centre of the table, when the choice is least clear, it takes longer to come down on one hand or the other.

I stopped there.  Because while this sounds intuitive, there is another intuition that points squarely in the opposite direction.  When the object is in the center of the table, that’s when it matters least which hand you use, so there is no reason to spend extra time thinking about it.  Right?  So…when you have competing intuitions you need a model.

You have to take an action, say “left” or “right,” and your payoff depends on the state of the world, some number between -1 and 1.  You prefer “right” when the state is positive and “left” when the state is negative, and the farther the state is from zero, the stronger that preference.  When the state is exactly zero you are indifferent.

You don’t know the state with perfect precision.  Instead, you initially receive a noisy signal about the state and you have to decide whether to take action right away (and which action) or wait and get a more accurate signal.  It’s costly to wait.  For what values of the initial signal do you wait?  Note that in this model, both of the competing intuitions are present.  If your initial signal is close to zero, it is likely that the true state is close to zero so your loss from choosing the wrong action is small.  Thus the gain from waiting for better information is small.  On the other hand, if your initial signal is far from zero, then the new information is unlikely to affect which action you take so again the gain from waiting is small.

But now we can compute the relative gain.  And the in-passing intuition quoted above is the winner.

Consider two possible values of the initial signal, both positive but one close to zero and one close to +1.  In either case, if you don’t wait you will take action “right.”  Now consider the gain from waiting.  Take any state x and consider the scenario in which waiting would lead you to believe that the state is x.  If x is positive then you would still choose “right” and waiting would gain nothing.  So fix any negative x and ask what the gain would be if waiting led you to believe that the state is x.  The key observation is that for any fixed x, this gain would be the same regardless of which of the two initial signals you had.

So the comparison boils down to how likely it is to end up at x from the two different initial signals.  And this comparison depends on how far to the left x is.  Values of x very close to -1 are much easier to reach from an initial signal close to zero than from an initial signal close to +1.  And these are the values where the gain is large.  On the other hand, for x’s just to the left of zero (where the gain is small), the relative likelihood of reaching x from the two initial signals is closer to 50-50.

Formally, unless the distribution generating these signals is very strange, the distribution of payoff gains after an initial signal close to zero first-order stochastically dominates the distribution of payoff gains when the initial signal is close to +1.  So you are always more inclined to wait when your initial signal is close to zero.
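If you want to check that claim without doing the stochastic-dominance algebra, a quick Monte Carlo will do it.  The sketch below is my own toy parameterization, not anything from the post: the state is uniform on [-1, 1], the initial signal adds normal noise with standard deviation 0.3, and waiting reveals the state exactly.  The expected gain from waiting, conditional on the initial signal, should be much larger near zero than near +1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
sigma = 0.3                          # noise in the initial signal (an assumption)

x = rng.uniform(-1.0, 1.0, n)        # true state
s = x + rng.normal(0.0, sigma, n)    # initial (noisy) signal

def expected_gain(lo, hi):
    """Average gain from waiting among draws whose initial signal lies in [lo, hi)."""
    xs = x[(s >= lo) & (s < hi)]
    act_now = np.sign(xs.mean())         # best action given only the (binned) signal
    payoff_now = act_now * xs.mean()     # expected payoff from acting immediately
    payoff_wait = np.abs(xs).mean()      # expected payoff after learning x exactly
    return payoff_wait - payoff_now

print("gain from waiting, signal near 0 :", expected_gain(-0.05, 0.05))
print("gain from waiting, signal near +1:", expected_gain(0.90, 1.00))
```

Running this, the first number comes out far larger than the second, which is the in-passing intuition winning again.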

For the sake of argument let’s take on the plain utilitarian case for waterboarding: in return for the suffering inflicted upon a single terror suspect we may get information that can save many more people from far greater suffering. At first glance, authorizing waterboarding simply scales up the terms of that tradeoff. The suspect suffers more and therefore he will be inclined to give more information and sooner.

But these higher stakes are not appropriate for every suspect. After all, the utilitarian cost of torture comes in large part from the possibility that this suspect may in fact have no useful information to give; he may even be innocent. When presented with a suspect whose value as an informant is uncertain, these costs are too high to use the waterboard. Something milder, like sleep deprivation, is preferred instead.

So the utilitarian case for authorizing waterboarding rests on the presumption that it will be held in reserve for those high-value suspects for whom the trade-off is favorable.

But if we look a little closer we see it’s not that simple. Torture relies on promises, not just threats. A suspect is willing to give information only if he believes that doing so will end, or at least limit, the suffering. When we authorize waterboarding, we undermine that promise because our sleep-deprived terror suspect knows that as soon as he confesses, thereby proving that he is in fact an informed terrorist, he changes the utilitarian tradeoff. Now he is exactly the kind of suspect that waterboarding is intended for. He’s not going to confess, because he knows that would make his suffering increase, not decrease.

This is an instance of what is known in the theory of dynamic mechanism design as the ratchet effect.

Taken to its logical conclusion this strategic twist means that the waterboard, once authorized, can’t ever just sit on the shelf waiting to be used on the big fish. It has to be used on every suspect. Because the only way to convince a suspect that resisting will lead to more suffering than the waterboarding he is sure to get once he concedes is to waterboard him from the very beginning.
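Here is a toy two-period version of that logic, with made-up numbers rather than anything from the paper: sleep deprivation costs the suspect 1 per period, waterboarding costs 5, and confessing in period 1 only ends the suffering if the interrogator’s period-2 policy allows it.

```python
SLEEP, WATER = 1, 5   # per-period suffering (illustrative numbers)

def suspect_payoff(first_period, after_confess, after_resist):
    """Total (negative) suffering from confessing vs. resisting in period 1."""
    confess = -(first_period + after_confess)
    resist = -(first_period + after_resist)
    return confess, resist

# Reserve policy: confessing proves you are high-value, so the utilitarian
# interrogator escalates to the waterboard afterwards.
print("reserve :", suspect_payoff(SLEEP, after_confess=WATER, after_resist=SLEEP))

# Upfront policy: waterboard from the very beginning; confessing ends the
# suffering, resisting means more of the same.
print("upfront :", suspect_payoff(WATER, after_confess=0, after_resist=WATER))
```

Under the reserve policy confession is strictly worse for the suspect (-6 versus -2), so the promise has no force; once the waterboard is used from the start, confession strictly dominates (-5 versus -10). That is the ratchet effect in two lines of arithmetic.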

The formal analysis is in Sandeep’s and my paper, here.

Two guys named David Pendlebury and Eugene Garfield use citation counts (combined with some other magic) to predict Nobel laureates.  They claim success: they have “correctly predicted at least one Nobel Laureate each year with the exception of the years 1993 and 1996.”  Granted, since they are picking 5 or so people in four fields (Chemistry, Economics, Physics, Medicine), that doesn’t seem like such a big deal.

But for the record, here are the predictions for Economics.

  1. Alberto Alesina for theoretical and empirical studies on the relationship between politics and macroeconomics, and specifically for research on the politico-economic cycle.
  2. Nobu Kiyotaki for formulation of the Kiyotaki-Moore model, which describes how small shocks to an economy may lead to a cycle of lower output resulting from a decline in collateral values that creates a restrictive credit environment.
  3. John Moore for formulation of the Kiyotaki-Moore model, which describes how small shocks to an economy may lead to a cycle of lower output resulting from a decline in collateral values that creates a restrictive credit environment.
  4. Kevin Murphy for pioneering empirical research in social economics, including wage inequality and labor demand, unemployment, addiction, and the economic return of investment in medical research, among other topics.

David K. Levine has written a white paper for the NSF proposing that they invest in large-scale simulated economies as virtual laboratories:

An alternative method of validating theories is through the use of entirely artificial economies. To give an example, imagine a virtual world – something like Second Life, say – populated by virtual robots designed to mimic human behavior. A good theory ought to be able to predict outcomes in such a virtual world. Moreover, such an environment would offer enormous advantages: complete control – for example, over risk aversion and social preferences; independence from well-meant but irrelevant human subjects “protections”; and great speed in creating economies and validating theories. If we were to look at the physical sciences, we would see the large computer models used in testing nuclear weapons as a possible analogy. In the economic setting the great advantage of such artificial economies is the ability to deal with heterogeneity, with small frictions, and with expectations that are backward looking rather than determined in equilibrium. These are difficult or impractical to combine in existing calibrations or Monte Carlo simulations.
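For flavor, here is the tiniest possible gesture at what such an artificial economy looks like in code.  This is my own toy, nowhere near the scale or realism Levine has in mind: a thousand robot agents with heterogeneous Cobb-Douglas tastes trade two goods, and a tatonnement loop gropes for the market-clearing relative price.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents = 1_000
alpha = rng.uniform(0.3, 0.9, n_agents)    # heterogeneous taste for good 1
endow1 = rng.uniform(0.5, 1.5, n_agents)   # endowments of good 1
endow2 = rng.uniform(0.5, 1.5, n_agents)   # endowments of good 2 (the numeraire)

def excess_demand_good1(p1):
    wealth = endow1 * p1 + endow2          # good 2 has price 1
    demand1 = alpha * wealth / p1          # Cobb-Douglas demand for good 1
    return demand1.sum() - endow1.sum()

# Tatonnement: nudge the relative price in the direction of excess demand.
p1 = 1.0
for _ in range(5_000):
    p1 += 0.05 * excess_demand_good1(p1) / n_agents

print("clearing price of good 1:", round(p1, 3),
      "| residual excess demand:", round(excess_demand_good1(p1), 3))
```

With these draws the price settles around 1.5 and excess demand shrinks toward zero; the point of Levine’s proposal is that you could dial the agents’ preferences, frictions and expectations at will and rerun the experiment in seconds.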

Tomatoes are about the only attribute these two have in common, so the choice comes down to personal preference. Heinz is spicier, with distinct Worcestershire notes. Market Pantry has mostly tomato flavor, which comes through precisely because it’s not as spicy. The flavor differences are apparent straight from the bottle or with fries.

With that conclusion, summarized briskly in workmanlike prose by journalists you’ve never heard of, Gladwell’s Grand Unifying Theory of Ketchup–which he was allowed to present in painstaking detail (and 5,000 words) in the nation’s most prestigious magazine–simply turns to air.

The background is in the Globe article.